What if the more pertinent question about AI is not how intelligent it can become, but whether it can ever intuit or imagine?
The tree which moves some to tears of joy is in the eyes of others only a green thing that stands in the way. Some see nature all ridicule and deformity... and some scarce see nature at all. But to the eyes of the man of imagination, nature is imagination itself. -William Blake
As I write this, generative AI tools and the large language models (LLMs) that power them have taken the world by storm. However, even a superficial understanding of how the algorithms work quickly reveals the limits of their approach. Given a prompt, an LLM predicts the most likely next word (or pixel, for example), drawing on its set of learned parameters. These models are a) operating within the confines of syntaxes such as language and b) dependent on how (in)comprehensive the training data has been.
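To make that concrete, here is a deliberately tiny sketch in Python, purely illustrative and not any particular model's implementation: the "model" is just word-pair counts gathered from a toy training text, standing in for the billions of learned parameters in a real LLM, but the principle of picking the statistically most likely continuation is the same.

```python
# A toy sketch of next-word prediction (illustrative only).
from collections import Counter, defaultdict

training_text = "the tree moves some to tears the tree stands in the way"

# "Training": record which word follows which. This is all the model knows.
words = training_text.split()
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent continuation seen in training, or '' if none."""
    if word not in next_word_counts:
        return ""  # outside its training data, the model has nothing to say
    return next_word_counts[word].most_common(1)[0][0]

print(most_likely_next("the"))    # 'tree' - the most common continuation
print(most_likely_next("river"))  # ''     - never seen, so no prediction
```

Everything this sketch can "say" is confined to the word sequences it has already seen, which is point b) above about training data in miniature.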
Thought versus language
The first problem with this is that studies indicate that human thought precedes language. We might conceive of our internal monologue, the chatter inside our heads, in terms of language (hence 'chatter'), but according to neuroscience, that is already one degree of abstraction removed from what is actually going on.
This means that with language-based AI models we are in the realm of the left hemisphere of the brain, according to Iain McGilchrist’s hemisphere hypothesis. It is the hemisphere that focuses on details: breaking things apart, labelling them as static objects, seeing them out of context. The left hemisphere tends not to place phenomena in broader contexts; it does not grasp the whole as something more than the sum of its parts. That is where the right hemisphere comes in and completes the meaning-making process.
Our ability to use language is also down to the left hemisphere (we know this from the symptoms of patients with left hemisphere lesions or stroke). Now, it turns out that there is a recently developed “non-invasive decoder that reconstructs continuous language” from the brain and that this data “can be separately decoded from multiple regions” of the brain — however, all this required substantial cooperation from the test subjects. These solutions are far from spontaneous mind-reading devices.
Thoughts become intelligible in embodied forms
Yet, they do raise the question of mind as a self-contained entity. Leo Kim discusses this in an article in Wired, referring to philosopher and logician Saul Kripke’s argument for expanding the notion of thoughts as ‘private language’ to something that gains meaning only when connected with the external world. “For Kripke, any thought that was private and only accessible to the thinker would ultimately become unintelligible, even to the thinker themself.”
According to this view, thoughts become intelligible only in interactions with the external world. This points to the importance of our embodied being, such as when we have to verbalise thoughts into language in conversation with others. I’m all for that, yet it appears that the insistence on intelligibility and language is a left-hemisphere criterion. It seeks to establish a grip on details (words and sentences) rather than the bigger picture, the gestalt or overall ‘shape’ of meaning, which the right hemisphere supports.
Consequently, if we have a generative AI tool that primarily operates on the level of written language, or one inferred from decoded brain recordings, it presumably privileges the left hemisphere’s take on the world. The tool focuses on labelling things, categorising, and paying attention to the individual parts rather than the whole. Such tools cannot leverage concepts that are not put into words or other languages of intellect, such as formulas, equations, or code.
Imagination’s laboratory
This brings us to imagination. Psychiatrist Neel Burton has defined imagination as follows:
I define imagination as the faculty of the mind that forms and manipulates images, propositions, concepts, emotions, and sensations above and beyond, and sometimes independently, of incoming stimuli, to open up the realms of the abstract, the figurative, the possible, the hypothetical, and the paradigmatic or universal. - The Psychology and Philosophy of Imagination
If I say that ChatGPT, for example, is incapable of imagination, I can instantly hear the counterarguments: the fantastic images generated, the insightful answers to prompts, even the ‘hallucinations’ as imagining things.
It’s not that the notion of imagination has not figured in discussions about technology. For example, in his book What Algorithms Want: Imagination in the Age of Computing, Ed Finn writes extensively about ‘algorithmic imagination’ but focuses, in my reading, more on what ‘algorithmic’ is than on what ‘imagination’ is.
Therefore, we need to go to the philosophers and the poets. In Biographia Literaria, the poet Samuel Taylor Coleridge wrote about imagination as “the laboratory in which thought elaborates essence into existence”.
Philosophy professor Richard Kearney’s synthesis echoes Coleridge’s notion:
The plurality of terms for imagination […] - yetser, phantasia, eikasia, imaginatio, Einbildungskraft, fantasy, imagination - have at least one basic trait in common: they all refer, in their diverse ways, to the human power to convert absence into presence, actuality into possibility, what-is into something-other-than-it-is. - Richard Kearney, Poetics of imagining, p.4.
Elaborating essence into existence, isn’t that what generative AI tools are doing?
Well, only if you take the metaphor literally, as the left hemisphere does. First, there is the point about thought preceding language: no thought is in play in how the transformer algorithms in LLMs work.
I read Coleridge’s essence, the ‘raw’ material for thought, as referring to something pre-linguistic. McGilchrist cites studies of mathematicians and how the best of them cannot imagine (pun intended) working on hard problems without mental imagery that comes before anything is put down in formal language or syntax:
creative imagination neither 'just' sees nor 'just' creates, but brings the new into existence through the combination of both, so rendering the authorship of what emerges ambiguous. And this is how we bring all our world into being: all human reality is an act of co-creation. It's not that we make the world up; we respond more or less adequately to something greater than we are. The world emerges from this dipole. We half perceive, half create. -Iain McGilchrist, The Matter with Things, p 765.
Second, Coleridge made the distinction between imagination and ‘fancy’. The latter is about dressing up something that already exists in a fantastic new appearance. I grant DALL-E and friends can do that. Fancy: no more, no less.
However, with “the laboratory in which thought elaborates essence into existence” Coleridge emphasised the potential of something coming into being: the essence is to be discovered, recognised, and attributed with meaning. Interpreting Coleridge, McGilchrist talks about imagination as an act of unveiling the world from the familiarity of the everyday.
The action of Imagination, by contrast, is seen by Coleridge as the soul that is diffused throughout whatever it informs, 'every where and in each; and forms all into one graceful and intelligent whole’. It is not added on top of reality, but brings reality into being as it were from within. Its result is not a chain of association, one thing added to the next, but a single seamless process: not a mixture or combination, either, but a compound, in which the parts are no longer separate but integrated into a new whole. -Matter with Things p. 770-1; Coleridge’s Biographia Literaria - emphasis by Aki.
My emphasis above points to the differences between the workings of human imagination and the ‘attention mechanisms’ of LLM transformer algorithms that add one thing (word, pixel, etc) after the next.
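For contrast, here is a minimal sketch (again illustrative Python, not any vendor’s actual code) of that ‘one thing after the next’ generation loop: the output is assembled as a chain of appended tokens rather than arriving as an integrated whole.

```python
# A minimal sketch of an autoregressive generation loop (illustrative only).
def generate(prompt_tokens, predict_next, max_new_tokens=10):
    """predict_next: any function mapping the tokens so far to the next one."""
    sequence = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = predict_next(sequence)   # chosen from the context so far
        if next_token is None:                # the model has nothing to add
            break
        sequence.append(next_token)           # one thing added to the next
    return sequence

# Example with a stand-in predictor that simply echoes the last word:
print(generate(["nature", "is"], lambda seq: seq[-1] if len(seq) < 5 else None))
# -> ['nature', 'is', 'is', 'is', 'is']
```

Whatever stands in for predict_next, the structure of the result stays the same: a sequence built up by association, one element after another.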
Imagination as the way to truth
Imagination is not just for artistic purposes. Albert Einstein ascribed a key role in scientific research to imagination; famously, he reached many of his insights via thought experiments, that is, by bridging reasoning with imagining.
Thus, imagination is about using one’s attention to find something that persists beyond the things we often find ourselves conforming to in everyday life. The French philosopher Henri Bergson believed that imagination played a crucial role in grasping reality and accessing deeper truths. McGilchrist summarises:
Imagination is far from certain, of course; but the biggest mistake we could make would be never to trust it - never to believe in it - for fear of being mistaken. For truth requires imagination. It alone can put us in touch with aspects of reality to which our habits of thought have rendered us blind. It leads not to an escape from reality, but a sudden seeing into its depths, so that reality is for the first time truly present, with all its import, whether that occur in the context of what we call science or what we call art. - The Matter with Things, p 768.
Coleridge also distinguished between primary and secondary imagination. He saw primary imagination as ‘nature’s imagination’: how we perceive the world as it is, spontaneously and without intention (as in the Blake quote I started with).
We employ secondary imagination when we create a new meaning or an object. For Coleridge, nature’s sounds are the primary music, and man’s compositions count as secondary music, drawing from the primary source.
At best, then, generative AI tools produce something like a tertiary imagination: one degree removed from their building blocks, i.e. the training data, and two degrees removed from perception and nature. And as long as there is a prompt involved, they work with an intention: the algorithms are directed to produce an output in response to the prompt.
Why is nature important here, I hear you ask. The answer is that nature is what is ultimately real, once we unveil it from all the manipulation that humans, with their technologies, have exerted upon it. We put hope in technologies, but we trust nature, as W. Brian Arthur has put it. McGilchrist elaborates on this:
My contention is that imagination, far from deceiving us, is the only means whereby we experience reality: it is the place where our individual creative consciousness meets the creative cosmos as a whole. […] It is the virtual, re-presented world of the left hemisphere that is the deceit. Imagination is not, as it is sometimes conceived, the capacity to conjure the unreal, but, for the first time, to see the real - the real that is, for reasons of deeply ingrained habit, no longer present to us. - The Matter with Things, p 774
Surely, this is where the current AI tools find their limits.
Even if AI could imagine, in the sense of reaching through imagination a fuller realisation of the world, the risk is that it imagines according to its creators’ world views and biases. In contrast, diverse imaginations might be the way to heal ourselves and the planet, as Indigenous neuroscientist Araceli Camargo suggests:
Imagination is important. It allows our minds to build new cognitive architecture, from which to form socio-cultural decisions. But we cannot keep upholding knowledge supremacy: it has created a homogenous, discriminating and dominating knowledge infrastructure that decides which knowledges are valued, listened to and acknowledged - and which are not. We need to embrace imaginations in order to heal - from all peoples, from all living beings. We need Indigenous futures in our imaginations. -Araceli Camargo, “Nature as health” in This Book is a Plant
The other I: intuition
Intuition is the second ‘I’ that the discussion about AI needs to pay attention to. Bergson talked about intuition as ‘intellectual sympathy’: something inexpressible that evades purely intellectual thinking, and therefore language. This will be the topic of the next post; please read on in Part 2: Intuition.
Thank you for reading. As ever, I leave you with a contemplative piece of algorithmic art:
With love and kindness,
Aki