Twenty-five years ago, the burgeoning science of consciousness studies was full of promise. With cutting-edge neuroimaging tools leading to new research programmes, the neuroscientist Christof Koch was so optimistic that he bet a case of wine that we’d uncover the secrets of consciousness by now. The philosopher David Chalmers had serious doubts, because consciousness research is, to put it mildly, difficult. Even what Chalmers called the easy problem of consciousness is hard, and that’s what the bet was about – whether we would uncover the neural structures involved in conscious experience. So Chalmers took the bet.
This summer, with much fanfare and media attention, Koch handed Chalmers a case of wine in front of an audience of 800 scholars. The science journal Nature kept score: philosopher 1, neuroscientist 0. What went wrong? It isn’t that the past 25 years of consciousness studies haven’t been productive. The field has been incredibly rich, with discoveries and applications that seem one step from science fiction. The problem is that, even with all these discoveries, we still haven’t identified any neural correlates of consciousness. That’s why Koch lost the bet.
If the easy problem is this hard, what does that make the ‘hard problem’? Chalmers described the hard problem of consciousness as understanding why material beings like us have experience at all. Solving the hard problem would give us a secure theory of consciousness that explains the nature of conscious experience. Philosophers and scientists alike want to solve the hard problem, and to do so many are focusing on the easy problem. But all that attention is making the hard problem harder than it needs to be.
We might enjoy a hard puzzle but abhor a puzzle with pieces missing. Today’s consciousness science has more pieces than it did 25 years ago. But there is reason to think that key pieces are still missing, turning an intellectual puzzle into an intractable problem. To see why, we have to revisit the assumptions that launched the field of consciousness research.
Only eight years before Koch and Chalmers made their bet, there wasn’t exactly a unified field of consciousness studies. A few scientists advocated studying animal consciousness and, while there was research on blindsight, amnesia and people with split brains, these research programmes were largely independent of one another. Calls to study consciousness from within some sciences were met with scepticism and derision. For example, the ethologist Donald Griffin wrote four books advocating for the study of animal consciousness, starting with The Question of Animal Awareness (1976). Though Griffin was a highly respected scientist who had co-discovered echolocation in bats, he didn’t have much success promoting the study of consciousness in his field. Students were warned away from the topic, with one comparative cognition textbook deriding attention to animal consciousness: ‘It seems positively foolhardy for an animal psychologist to blunder in where even philosophers fear to tread.’ For many, consciousness was a taboo subject, much like other fanciful questions about artificial intelligence, psychedelics or alien life (all of which are also enjoying scientific attention these days, interestingly enough).
Arguably, it was Koch who helped turn consciousness studies into a real science with the publication of ‘Towards a Neurobiological Theory of Consciousness’ (1990). This paper was co-authored with Francis Crick, who comes with about as much scientific prestige as you could ask for – after all, Crick won the Nobel Prize in 1962 for his role in the discovery of the structure of DNA. The Crick and Koch manifesto had an enormous impact on the development of this new science, setting the stage for how it should proceed:
We shall assume that some species of animals, and in particular the higher mammals, possess some of the essential features of consciousness, but not necessarily all. For this reason, appropriate experiments on such animals may be relevant to finding the mechanisms underlying consciousness … We consider that it is not profitable at this stage to argue about whether ‘lower’ animals, such as octopus, Drosophila or nematodes, are conscious. It is probable, though, that consciousness correlates to some extent with the degree of complexity of any nervous system.
By supposing that ‘higher mammals’ possess some essential features of consciousness, Crick and Koch took up Griffin’s call to study consciousness in animals. By taking this courageous approach, Crick and Koch put aside the still-common Cartesian view that language is needed for conscious experience:
[A] language system (of the type found in humans) is not essential for consciousness. That is, one can have the key features of consciousness without language. This is not to say that language may not enrich consciousness considerably.
By rejecting the language-centrism of the day, Crick and Koch were giving scientists more puzzle pieces to work with. Specifically, they suggested that scientists focus on a capacity that humans share with ‘higher animals’ – vision. The reasons they give for this choice are pragmatic, but also explicitly anthropocentric and theory-driven:
At this point we propose to make a somewhat arbitrary personal choice. Since we hypothesise that there is a basic mechanism for consciousness that is rather similar in different parts of the brain (and, in particular, in different parts of the neocortex), we propose that the visual system is the most favourable for an initial experimental approach … Unlike language, it is fairly similar in man and the higher primates. There is already much experimental work on it, both by psychophysicists and by neuroscientists. Moreover we believe it will be easier to study than the more complex aspects of consciousness associated with self-awareness.
Reading the Crick and Koch manifesto today is almost eerie, given how well it predicted the next 33 years of consciousness studies with its focus on vision in mammals. In jumpstarting the field of consciousness studies, Crick and Koch designated the range of acceptable research subjects and research questions. Their idea was that we can’t search for consciousness without relying on consciousness as we know it, and consciousness as we know it is human consciousness. The so-called ‘higher mammals’ are animals like us, social primates who rely heavily on vision to engage with the world.
What was set aside is that animals quite unlike us also use vision. The so-called ‘lower mammals’ also have eyes, since all mammals do. The same goes for birds and most reptiles and fish, with only some blind cave fish who lost the ability. But it isn’t just in these familiar species that we find eyes. The box jellyfish has 24 eyes, with four different types specialised for different tasks. Scallops have around 200 eyes, all of the same type, each with a pupil that can dilate and two retinas. When the study of consciousness is grounded in the study of human-like vision, the field becomes unapologetically anthropocentric, discounting animal models that might be key puzzle pieces.
More importantly, it also makes the field conspicuously neurocentric. By including only ‘higher mammals’ in the study of consciousness, Crick and Koch replaced the language-centric view of consciousness with a neurocentric one. Now it isn’t language that is presumed necessary for consciousness, but a nervous system. The theory behind Crick and Koch’s proposal presumes that there are similar neural mechanisms for consciousness across different regions of the human brain and, since some animals have neural systems that are similar to some of our neural systems, we can study the brains of those animals – animals like us. If we are committed to the idea that complex brains are needed for consciousness, we wouldn’t profitably study scallops, who don’t even have a brain, or jellyfish, who have a small net of approximately 10,000 neurons. The Chalmers-Koch bet was framed within this commitment, which is why it was over whether the science would discover the neural correlates of consciousness.
While the past decades of research under this approach failed to establish a particular theory of consciousness, the neuroscientific research did pay off in a very different, and surprising, respect – it was used to identify other conscious animals.
In 2012, scientists held a conference memorialising the research of Crick, who had died eight years earlier. Here they publicly proclaimed the Cambridge Declaration on Consciousness, stating that there is sufficient evidence to conclude that ‘all mammals and birds, and many other creatures, including octopuses’ experience conscious states, and that:
The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviours.
The Declaration uses the term ‘substrates of conscious states’, which implies that there have been established discoveries about the source of consciousness – an answer to the easy problem. But, as the outcome of the bet made clear, we don’t have a secure theory. Instead, the Declaration identifies new consciousness markers, features that offer evidence that the system is conscious.
In everyday life, humans rely on markers such as goal-oriented behaviour, communicative interaction and emotional expression when we see other humans as conscious agents. We also rely on these sorts of markers when we see our pets (and other ‘higher mammals’) as conscious. These ordinary markers help us interpret behaviour, seeing it as the result of actors’ desires and informational states, and they help us explain why individuals do what they do. Prescientific markers lead me to think that my dog Riddle enjoys going for walks, because he gets excited when I pick up the leash. They also lead me to think that he’s pretty attached to me, because if someone else in the family gets his leash, he looks nervously at me, worried that I won’t be accompanying the party.
The Declaration points to five consciousness markers that are the results of scientific research: homologous brain circuits; artificial stimulation of brain regions causing similar behaviours and emotional expressions in humans and other animals; neural circuits supporting behavioural and electrophysiological states of attentiveness, sleep and decision-making; mirror self-recognition; and similar impacts of hallucinogenic drugs across species. All five markers are derived markers – the outputs of scientific studies on humans and the higher mammals. The authors of the Declaration took having some of these markers as sufficient evidence for consciousness. The neurophysiology of octopuses, for instance, was deemed sufficiently complex to conclude that they are likely conscious, even though they haven’t demonstrated mirror self-recognition.
Mirror self-recognition is a behavioural marker: the test makes no assumptions about how the behaviour is supported physiologically. You pass the mirror test by touching or rubbing a mark that was surreptitiously made on your body. Children pass this test at around 18 months. Great apes, dolphins, cleaner fish, magpies, Asian elephants and, most recently, ghost crabs have passed this test too. But mirror self-recognition is just one marker, and the other markers emphasise neurophysiology, reflecting the neurocentrism proposed by Crick and Koch. Passing a behavioural test can offer some evidence of consciousness but, for the Declaration, the robust evidence comes from having the right kind of neuroanatomical, neurochemical and neurophysiological properties. It’s this emphasis on the neurological that may be holding the science back. Similarity to human physiology can support conclusions that other animals are conscious, but we shouldn’t take our physiology to be necessary for consciousness. By engaging in animal research in the first place, researchers are already endorsing multiple realisability – the view that mental capacities can be instantiated by very different physical systems. When we look only at slightly different physical systems, we may be overlooking the key piece of the consciousness puzzle.
The anthropocentrism in Crick and Koch’s original proposal, perhaps surprisingly, led to new conclusions about other conscious animals. This shift away from humans might be seen as an invitation for scientists to profitably study consciousness in new species such as octopuses. However, in the past 10 years there has not been a big shift in the species studied, with most labs still focusing on vision in humans and monkeys, and still committed to the idea that consciousness correlates with the degree of complexity of the nervous system. Change can be hard, and expensive, especially when it centres on primate research. But by not shifting attention to other species and other aspects of consciousness, we’re making the hard problem so much harder.
Vision might seem like a simple conscious capacity to study if you compare it with self-consciousness, as Crick and Koch did, but the mammalian visual system is a highly evolved feature of the nervous system, appearing more than 200 million years ago. That’s a long time for a system to gain complexity. The proposal to investigate consciousness by studying simpler systems follows standard scientific procedure, since much progress in science comes from studying simpler cases before moving on to more complex ones.
Our current neurocentrism is in tension with the method of studying simpler systems first. Studying convoluted examples of conscious animals to understand consciousness is like reverse-engineering the electronic calculator to understand how machines perform addition, rather than starting with the abacus. In biology, model organisms like the nematode worm Caenorhabditis elegans have been significant players in many of our scientific discoveries of the past 80 years, with their simple nervous systems and easily observable cell development and death. These microscopic worms are being used to study phenomena from nicotine addiction to ageing. Why not use them to study consciousness, too?
The answer to that question is also simple: such animals are not presumed to be conscious. We see this view expressed again and again in the consciousness literature. The philosopher Michael Tye writes in Tense Bees and Shell-shocked Crabs (2016): ‘Since worms have only around 300 neurons, it seems a great leap of faith to suppose they could genuinely feel pain.’ The worry is repeated by the neuroscientist Anil Seth in his book Being You (2021): ‘By the time we reach the nematode worm with its paltry 302 neurons I find it difficult to ascribe any meaningful conscious status …’
The opinion that worms aren’t conscious is reflected in a 2020 survey of philosophers’ opinions about major philosophical issues, which included a question about what sorts of entities are conscious. A majority of philosophers accept (or lean toward accepting) consciousness in adult humans (95.15 per cent), in cats (88.55 per cent), in newborn babies (84.34 per cent), and in fish (65.29 per cent). There is quite a bit more scepticism about flies (34.52 per cent), worms (24.18 per cent) and plants (7.23 per cent). It is quite telling – and note this survey was conducted before the introduction of ChatGPT – that 39.19 per cent of philosophers surveyed think future AI systems will be conscious.
If future AI systems are anything like current AI systems, they will not have neurons, but they will closely resemble us in terms of linguistic behaviour. Today, even as scientists approach the question of consciousness by examining neural correlates, we are wondering about nonbiological consciousness in AI systems. The question of AI consciousness sits uneasily next to the neurocentrism of current science. It may be that anthropocentrism drives opinions about what is conscious more than neurocentrism does. Neurocentrism is a consequence of the anthropocentric reasoning that drives consciousness research, with mammalian-like nervous systems identified as the key feature. If ChatGPT encourages researchers to move away from neurocentrism, we may end up back with the language-centrism that Griffin worked to undermine. That would not be productive science.
But there is another move, and that is to extend animal research beyond the current laser focus on mammalian brains. Crick and Koch proposed the study of the visual system because we already know a lot about it, and it is similar across mammals. Presumably, they also proposed the study of the visual system because they thought vision generally involves conscious experience. Vision is one sensory modality, and though it is widely shared across many taxa of animals, it isn’t the only sensory modality, and it isn’t the sensory modality that evolved first in the animal lineage. Chemoreceptors sense chemical properties, as in taste and smell, and these exist across animal taxa, including in C elegans. Such sensory capacities allow C elegans to sense taste, smell, temperature and motion, and to learn through habituation and association. The worms habituate to manual taps, and learn to avoid salt ions previously paired with garlic. They learn, they have memory, and they move through their environments toward the things they need and away from the things they don’t.
A few researchers are studying consciousness in invertebrates, but this research tends to be focused on identifying markers that provide evidence that the animal is conscious. For example, recent research on bumblebee consciousness has focused on identifying markers for pain experience, and a report commissioned by the UK Department for Environment, Food and Rural Affairs presented evidence of pain experience in crabs and octopuses.
What might we learn if our anthropocentrism didn’t lead us to focus on the brain as the relevant part of physiology needed for consciousness, but instead led us to examine the behaviours that are associated with experiences? We could then study the nature of consciousness by looking at bees, octopuses and worms as research subjects. All these animals have a robust profile of behaviours that warrant the hypothesis that they are conscious. Moving away from painful stimuli, learning the location of desirable nutrients, and seeking out what is needed for reproduction are things we share widely with other animals. By studying other animals such as C elegans, animals that show evidence of associative learning and that have sensory systems, we can significantly simplify research on consciousness.
Studying consciousness in animal species who lack a mammalian nervous system doesn’t help the science avoid anthropocentrism. We are still starting with the case of humans, and considering the sorts of behaviours we engage in that are associated with conscious experience – perceiving the environment, sensing pain and pleasure. And that’s OK. Anthropocentrism is unavoidable in the science of consciousness as much as it is unavoidable in our other sciences. This is because we are human, and we see things from our human perspective. As the philosopher Thomas Nagel pointed out, there is no view from nowhere. Instead, there are views from various perspectives. As humans, we have some shared perspectives given typical human physiology and life history. But we also have perspectives that are incredibly different from one another. The perspective reflected in the survey of philosophers – that infant humans are probably conscious, that fish may be, and that plants probably are not – is a cultural perspective that reflects the demographics of today’s professional philosophers. What might the starting assumptions about consciousness look like if professional philosophers were not overwhelmingly white, male and WEIRD?
The science can start with understanding consciousness as a property of humans, and still sit with surprising and perhaps disturbing cases of consciousness in unfamiliar places – in animals whose lives are largely hidden from us due to their size, morphology or habitats. Finding similarities between us and the smallest and simplest animals might make some uneasy, but such similarities also raise intriguing puzzles and give us more pieces we can use to solve the problem.
Focusing on model organisms closely related to humans was perhaps important in the 1990s and 2000s, after Crick and Koch launched the scientific study of consciousness. At that time, there was still significant scepticism in some quarters about animal consciousness, due to language-centrism. Today, we see that accepting the premise that ‘higher mammals’ are conscious hasn’t led to a theory of consciousness, but it has led us to accept more animals as conscious. It’s time to broaden consciousness research to include these other species, too.
Just as Crick and Koch pushed back on the popular view of their time that language is needed for consciousness, today we should push back on the popular view of our time that a complex brain is needed for consciousness. Maybe in another few years we will need to push back on another assumption, and at that time find it profitable to study consciousness in biological systems beyond animals, such as plants and fungi. If we recognise that our starting assumptions are open to revision and allow them to change with new scientific discoveries, we may find new puzzle pieces, making the hard problem a whole lot easier.