TLDR: Efforts to model imagination and intuition in AI solutions are at best approximations or simulations of what they try to capture. As such, they represent objectives that the left hemisphere of our brain leans towards, with the shortcoming that the left hemisphere does not understand the limits of its own thinking and reach (according to Iain McGilchrist’s hemisphere hypothesis). This stance regularly leads to mistaking the map for the territory; in the context of AI, that tends to manifest as overlooking the embodied aspects of knowing. Read on.
Intuition may help us grasp what intelligence fails to provide. - Henri Bergson
In part one, I explored the notion of imagination and why generative AI, more precisely LLMs, falls short of emulating human imagination. Intuition is the other ‘I’ that AI will struggle with.
The dictionary says that an intuition is “a thing that one knows or considers likely from instinctive feeling rather than conscious reasoning”.
Intuitions are not infallible, but according to philosophers like Iain McGilchrist, it is the technical-empirical, left-hemisphere-dominated world views that have strongly undermined the value of intuition in making sense of the world. Intuitions have been considered antithetical to rational analysis. McGilchrist calls for a middle ground between imagination, intuition, and reason:
We should never dismiss reason, but then we should never just dismiss intuition, either. We need them to work together, since reason guided by intuition is better than reason that disregards it; and intuition that is not dismissive of reason is better than intuition that is. We need as many ways of getting hold of reality as we can. -Iain McGilchrist, The Matter with Things, p. 722.
Intuition as the inexpressible beyond intelligence
The philosopher Henri Bergson talked about intuition as ‘intellectual sympathy’, as something inexpressible that evades purely intellectual thinking. He put analysis — something that the transformers within LLMs are doing — in opposition to intuition because unlike intuition, analysis reduces things to what is already known. “[O]ne can move from intuition to analysis but not from analysis to intuition”, Bergson observed.
Psychologist James Hillman cites the poet Ralph Waldo Emerson, who contrasted intuition with _tuition_, concluding that while intuition cannot be formally learned, one needs both tuition and intuition to act with practical wisdom in the world. I love Hillman’s notion of ‘mythical sensibility’ and how it is included in intuition, “for when a myth strikes us, it seems true and gives sudden insight.” (The Soul’s Code, p. 97.)
It seems, then, that intuition requires a continuous presence in the world so that the sudden insight can appear from accumulated threads of perception and emotion.
In terms of Iain McGilchrist’s hemisphere hypothesis, intuition belongs to the right hemisphere’s stance towards the world, i.e. it reaches beyond language, towards metaphor and embodied forms of knowing. Whereas the left hemisphere’s stance towards the world is one of categorising and labelling, the right hemisphere places the narrow, grasping focus of its lateral counterpart into context, into wholes.
There is a school of thought, based on biology and physics, that everything that happens in the universe is deterministic, i.e. follows predetermined patterns and paths (e.g. Robert Sapolsky, Determined: A Science of Life Without Free Will). According to this view, intuitions are the moments when all the deterministic threads leading up to that point become clear: moments of intuitive clarity or discovery. The idea is that the deterministic threads are so complex that our rational thought cannot process them, yet in the intuitive moment they reveal themselves in a snapshot.
Henri Bergson anticipated this in suggesting that ‘intuition may bring the intellect to recognise that life does not quite fit into categories’. This leads the author Michael Foley to summarise that, for Bergson, ‘Intuition is the personal experience of unity’, where multiple strains of hereditary thought combine momentarily into one.
Intuitions speak to embodiment and mortality
Furthermore, Foley observes that ‘intuition is the ability to identify with others and the world.’ Surely, this is where the current wishful thinking about AGI (artificial general intelligence) or ‘super-intelligence’ finds its limits.
From his reading of Bergson on intuition, Foley concludes that
The lesson is that intuition lies somewhere in the middle of a continuum with instinct at one extreme and rational response at the other; it is instinct trained by intelligence - or intelligence guided by instinct. Both extreme reactions are suspect. - Michael Foley, Lessons from Bergson, p. 35.
I will let you draw your own conclusions about where LLMs land on that continuum; I’d say they miss the mark altogether. The main reason is the lack of embodiment, i.e. the experience of being in the world and the uncertainty that the knowledge of one’s mortality brings to everyday human existence.
This might seem quite a leap, jumping from discussing LLMs to contemplating mortality, but there is evidence that human meaning-making, when practised beyond the intellect, draws from our physical finitude.
Intuitions evolve in our movement and engagement with the world and with one another, not in detachment and stasis. Common sense is the ultimate embodied skill that is acquired effortlessly through experience, and, to be effective, it needs to be protected from the gaze of analysis. -Iain McGilchrist, The Matter with Things, p. 746.
LLMs operate on a finite quantity of training data, unaware of its finitude, and therefore cannot break free from a left-hemisphere stance towards that data, their ‘world’.
Henri Bergson described intuition as inhabiting its subject rather than circling it the way rational analysis does. Circling from afar implies an objective ‘view from nowhere’, in which the witnessing subject, such as a scientist making observations, with all their biases, self-deceptions, and perceptual limitations, does not exist.
In contrast, there are other ways of knowing, such as the kind of Indigenous Knowledge that Tyson Yunkaporta writes about:
Scientists currently have to remove all traces of themselves from experiments, otherwise their data is considered to be contaminated. Contaminated with what? With the filthy reality of belongingness? The toxic realisation that if we can’t stand outside of a field we can’t own it? I don’t see science embracing Indigenous methods of inquiry any time soon, as Indigenous Knowledge is not wanted at the level of how, only at the level of what, a resource to be plundered rather than a source of knowledge processes. - Tyson Yunkaporta, Sand Talk: How Indigenous Knowledge Can Save the World, p. 49.
The ‘belongingness’ Yunkaporta writes about amounts to inhabiting the subject rather than constructing an objective viewpoint on it. If anyone out there is training AI on Indigenous resources, while being conscious of all the shortcomings of current AI solutions, please let me know.
Intuition is a threat to technology
The object of intuition can just as well be something intellectual. It can be an insight that one then sets out to validate empirically with data — but here I want to focus on intuitions about the mysterious and elusive aspects of nature and reality.
Debunking the worth of intuition is, I suggest, often a misogynistic manoeuvre. Intuition has long been associated with female behaviour, and adopting a pejorative stance towards it is a symptom of centuries of patriarchal power. If one subscribes to such an ideology, intuition can be considered a threat to the rationality of technology and power:
Intuition is also a threat to a world-picture based on administration, adherence to ordained procedures, the power of technology, and a belief in the superiority of abstract mentation over embodied being. And to the reductionist, the power of intuition is also a threat that must be ‘debunked’. - Iain McGilchrist, The Matter with Things, pp. 723-4.
We, historically and typically men, are creating increasingly sophisticated technologies that manipulate nature and, as it turns out, go on to destroy it. These technologies seemingly do the same to reality, and, to my mind, in so doing they also risk taking us even farther from grasping reality and, consequently, clouding our intuitions.
McGilchrist’s call for a middle ground between reason, imagination, and intuition, which I quoted at the beginning, arises from a similar concern: that we have lost a meaningful grip on reality and on what the virtues of the human spirit, the true, the good, and the beautiful, mean.
What do we lose if current AI is unable to draw from intuition or imagination?
Maybe AI does not need genuine imagination or intuition, as long as we understand that AI’s attempts at something similar are not equal to human imagination and intuition. Developing AI as an assistant to human capabilities is, in any case, indicative of the kind of attention this technology deserves.
Yet, I believe we are in danger of losing multiple things.
These losses start with the models’ lack of embodiment, which limits any awareness of being in the world. Listening to a World Economic Forum panel on generative AI, I was struck by how AI entrepreneurs seem to regard robotics as a form of embodiment. From that standpoint, it is only a small step to transhumanist fever dreams about disembodiment, uploading our consciousness to the cloud, and so on. I suggest we reserve the notion of embodiment for beings with metabolism, a quality I also regard as a prerequisite for consciousness, in line with idealist world views put forward by philosophers such as Bernardo Kastrup.
Finally, I wish to draw your attention to the early 20th-century philosopher Max Scheler’s views on intuition. According to Scheler, we apprehend values through a unique faculty called "emotional intuition" or "value-ception". This emotional intuition allows us to grasp values directly, without the need for rational inference or calculation. Again, how would a set of algorithms intuit anything, let alone something as complex as values?
Thank you for reading. As always, I leave you with a piece of contemplative algorithmic art — a combination of human imagination and computational algorithms:
With love and kindness,
Aki