Chief priest Bungen Oi holds a robot AIBO dog prior to its funeral ceremony at the Kofukuji temple in Isumi, Japan, on 26 April 2018. Photo by Nicolas Datiche /AFP/Getty

Robot says: Whatever

What stands in the way of all-powerful AI isn’t a lack of smarts: it’s that computers can’t have needs, cravings or desires

by Margaret Boden

In Henry James’s intriguing novella The Beast in the Jungle (1903), a young man called John Marcher believes that he is marked out from everyone else in some prodigious way. The problem is that he can’t pinpoint the nature of this difference. Marcher doesn’t even know whether it is good or bad. Halfway through the story, his companion May Bartram – a wealthy, New-England WASP, naturally – realises the answer. But by now she is middle-aged and terminally ill, and doesn’t tell it to him. On the penultimate page, Marcher (and the reader) learns what it is. For all his years of helpfulness and dutiful consideration towards May, detailed at length in the foregoing pages, not even she had ever really mattered to him.

That no one really mattered to Marcher does indeed mark him out from his fellow humans – but not from artificial intelligence (AI) systems, for which nothing matters. Yes, they can prioritise: one goal can be marked as more important or more urgent than another. In the 1990s, the computer scientists Aaron Sloman and Ian Wright even came up with a computer model of a nursemaid in charge of several unpredictable and demanding babies, in order to illustrate aspects of Sloman’s theory about anxiety in humans who must juggle multiple goals. But this wasn’t real anxiety: the computer couldn’t care less.
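
Machine 'prioritisation' of this kind is thinner than the word suggests. Here is a minimal, hypothetical sketch in Python (my own toy illustration, not Sloman and Wright's model, which was far richer): goals are simply records carrying numeric urgency scores, and 'choosing' the most urgent one is a matter of arithmetic rather than of anxiety.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Goal:
    # Lower number = more urgent; this ordering is all the 'priority' there is.
    urgency: int
    name: str = field(compare=False)

# A toy agenda for an imagined robot nursemaid: nothing on it is felt as pressing.
agenda = []
heapq.heappush(agenda, Goal(2, "warm the bottle"))
heapq.heappush(agenda, Goal(1, "comfort the crying baby"))
heapq.heappush(agenda, Goal(3, "tidy the nursery"))

while agenda:
    goal = heapq.heappop(agenda)      # the 'most important' goal is selected...
    print(f"Pursuing: {goal.name}")   # ...and pursued without the least concern
```

Whatever labels we attach, the numbers do all the work; wiping the agenda clean would not trouble such a program in the slightest.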

Every living organism has needs: there are certain things it requires to survive, and which it actively seeks out by way of mechanisms that have evolved to maintain its existence. Those mechanisms range from general metabolism, through specific neurotransmitters and hormones such as oxytocin, to motor actions and, in humans, complex thought and idiosyncratic motivations.

In basic psychological terms, what are humans’ needs? What are the things that matter to us? Personality theorists have offered various answers to that question, but among the most helpful is Abraham Maslow’s ‘hierarchy of needs’. Maslow’s original formulation in 1943 identified five levels. (Three more, covering ‘curiosity’ or ‘exploration’, ‘aesthetics’, and ‘religion’, were added later.) The first level comprised biological needs – such as food, shelter, warmth, sex, and sleep. The second focused on ‘safety’: protection from the environment, law and order, stability, and security. The third level concerned ‘love and belonging’, including friendship, acceptance, love, and being part of a group – not only family, Maslow said, but also at work. Fourth were the needs for ‘esteem’. These included both self-esteem (dignity, achievement, independence) and respect from others (status, prestige). Finally, ‘self-actualisation’ needs covered self-fulfilment, personal growth, and particularly intense, ‘peak’ experiences.

The lowest level of all is necessary for our very being. Without food and drink, we die; and without sleep, we cease to function coherently. However, psychological wellbeing requires more than that. A person can survive, unhappily, if only their first- and second-level needs are met. But to thrive, the third level (love/belonging), and the fourth (esteem), must be satisfied too. And to flourish, the top level of self-actualisation must be reached.

Needs are where the caring in human intelligence comes from. The urgency of unsatisfied needs lies in the fact that, thanks to biological evolution, a person will normally put significant effort into satisfying them. The lower the need’s level, the more effort will be devoted to it when it is deficient. So love/belonging is a stronger motivator than dignity, status, or curiosity. Indeed, it’s the first thing that matters, after the very basic needs have been met. A community that systematically thwarted it – for instance, by preventing people from working happily alongside others – would not be a contented one.

Certain aspects of some needs are partially understood at the physiological level. For instance, the key biological mechanism that underlies love, belonging and attachment is oxytocin. This molecule, its derivatives and their behavioural effects have evolved over 500 million years. Social attachment became hugely more important when mother/child bonding emerged in nursing mammals; it is even crucial to the quasi-social consumer-brand relationships fostered by modern advertising. But oxytocin is not the whole biological story. It works in concert with complex brain circuitry that is by no means fully identified or understood.

A concept that’s intended to cover all levels of cognition is Karl Friston’s free energy principle, or FEP. Friston sees FEP as the basis of both life and mind, because it expresses the fact that self-organising biological agents, at all levels, resist disorder. This fiercely mathematical idea rests on Bayes’s theorem, and has been welcomed by those AI scientists, neuroscientists and philosophers who speak of ‘the Bayesian brain’ and favour the view that cognition is fundamentally predictive. However, Friston himself admits that the FEP is too abstract to provide a satisfactory description of psychological reality. It doesn’t help us to identify specifically human needs. Nor does it capture the striving – often, urgent striving – that characterises human intelligence. It doesn’t reflect the fact that ‘Needs must!’, that ‘Needs matter!’ In short, despite being much less rigorously defined, the folk-psychological concepts of ‘needs’ favoured by personality theorists are more intelligible, and more generally useful.

In AI systems, a new goal is accepted because a human arranged for that to happen

Based as they are in biology, needs are widely shared. But individual humans follow many different goals, and have myriad motivations. The question arises, then, how these goals and interests come to be accepted by the person, and integrated within their motivational economy.

In AI systems, a new goal is accepted or generated because, ultimately, some human being arranged for that to happen. The new goal doesn’t have to ‘fit’ with those already in the system. In Homo sapiens, by contrast, new goals are accepted because they are somehow linked, perhaps very indirectly, to one or more pre-existing needs.

This mesh of human needs was captured by the personality theorist Gordon Allport, in his concept of ‘functional autonomy’ – developed to counter what he saw as Sigmund Freud’s over-emphasis on sex. Once a goal has been accepted in the service of some basic need, Allport said, the motivation might become different from whatever had been driving the original behaviour. So, for example, a young man who initially takes up ice-skating in order to spend time with his partner might eventually become a world-class skater to whom sexual or romantic need, in this context, is irrelevant. The needs for esteem and self-actualisation, and the goals (and rewards) associated with skating, have taken over. The need for sex, said Allport, has not been sublimated, as Freud might have said, but substituted.

Some newly accepted goals are generated from scratch by the individual person. But many are inherited from the surrounding culture. So what motivates us – what we care about – is ultimately grounded in needs, but pervades the many-levelled purposive structures that organise human behaviour. Those structures are hugely more complex than the most complicated, most greedily data-crunching AI program.

Consider a man queuing outside a baker’s shop with the intention, or goal, of buying a loaf of rye bread. He might be hungry, of course. Or he might (also?) hope to increase his social standing by making cucumber sandwiches for Lady Bracknell. Or he might plan to test the bread biochemically, to confirm a diagnosis of ergot poisoning in the neighbourhood. He might carry his bread around the corner with a lascivious smirk, offering it to a starving and beautiful girl. He might use it as part of a still-life he is painting. Or he might buy 50 loaves, and gobble them down to win an entry in The Guinness Book of Records. Possibly, he is planning to use the bread in a religious ritual, such as the Eucharist.

These varied motives, and interests, underlying the ‘one’ intention draw on seven distinct needs identified within Maslow’s hierarchy. Clearly, yet more alternatives could be imagined. With such cognitive and motivational complexity, it’s not surprising that there’s plenty of scope for gossip about human intentions. This social aspect of human relationships turns out to be an important part of what we find satisfying about them – even in the workplace, where being productive appears to be the primary goal.

Social-psychological research on unemployment, pioneered in the village of Marienthal near Vienna by the Austrian social psychologist Marie Jahoda during the Depression of the early 1930s, has shown that having a job doesn’t merely put food on the table. In addition, it helps to satisfy a number of basic human needs. Jahoda’s team discovered that the most devastating psychological effect of unemployment is not anxiety about money but a general apathy and loss of motivation – in other words, a drop in the urgency of caring, of focusing on what matters. This lack of caring is not primarily due to finances (so a universal basic income would not avoid it). Instead, it is caused by factors such as the loss of status, time-structuring, self-definition and participation in a shared enterprise at work.

There was also another surprising discovery, which we might call ‘the Marienthal bubble’. Like many social-media users today, the unemployed individuals in Marienthal were isolated within a limited circle of friendships and opinions. Not having to go to work each day, they avoided encountering different beliefs and unpleasant personalities. They didn’t complain about that. But Jahoda’s team could see that it wasn’t good for their psychological wellbeing. They were becoming more intolerant, more impatient with disagreement, and less resilient in the face of anti-social behaviour of various kinds.

Around the same time in the United States, the industrial psychologist Elton Mayo discovered that job satisfaction could not be captured by purely instrumental or financial criteria. At the end of a series of improvements at the Hawthorne telephone factory in the suburbs of Chicago, the workers’ productivity increased yet again when their lighting was lowered and their wages were cut. On further experimentation, Mayo concluded that the vital factors were psychological – primarily, cohesion and cooperation within the group, and respect and interest from the management. These issues (and others, such as participation in decision-making) were the key to how people experienced their jobs – and to how well they performed them.

When AlphaGo beat the world champion Lee Sedol, it felt no satisfaction, still less exultation

Does all this give us reasons for optimism when we look to the automated factories and offices of the future? Talk of human-AI cooperation is usually seen as ‘good news’. Perhaps collaboration between people and goal-seeking computers is not only possible in principle, but also – if put into practice – satisfying for the people involved, because they would benefit from participation in a shared enterprise. Will we be able to share with our AI ‘colleagues’ in jokes over coffee, in the banter between rival football fans, in the arguments about the news headlines, in the small triumphs of standing up to a sarcastic or bullying boss?

No – because computers don’t have goals of their own. The fact that a computer is following any goals at all can always be explained with reference to the goals of some human agent. (That’s why responsibility for the actions of AI systems lies with their users, manufacturers and/or retailers – not with the systems themselves.) Besides this, an AI program’s ‘goals’, ‘priorities’ and ‘values’ don’t matter to the system. When DeepMind’s AlphaGo beat the world champion Lee Sedol in 2016, it felt no satisfaction, still less exultation. And when the then-reigning chess program Stockfish 8 was trounced by AlphaZero a year later (even though AlphaZero had been given no data or advice about how humans play), it wasn’t beset by disappointment or humiliation. Garry Kasparov, by contrast, was devastated when he was beaten at chess by IBM’s Deep Blue in 1997.

So if you were to succeed in working with some clever AI system – as Kasparov can today, as the biological half of a human-AI chess ‘centaur’ – you couldn’t celebrate that success together. You couldn’t even share the minor satisfactions, excitements and disappointments along the way. You’d be drinking your Champagne alone. You’d have a job – but you’d miss out on job satisfaction.

Moreover, it makes no sense to imagine that future AI might have needs. AI systems don’t need sociality or respect in order to work well. A program either works, or it doesn’t. For needs are intrinsic to, and their satisfaction is necessary for, autonomously existing systems – that is, living organisms. Needs can’t sensibly be ascribed to artefacts.

Some AI scientists disagree. Steve Omohundro, for instance, argues that any sophisticated AI system would develop ‘drives’ – such as resisting being turned off, trying to make copies of itself, and trying to gain control of resources no matter what other systems (living or not) might be harmed thereby. These, he says, would not have to be programmed in. They would develop ‘because of the intrinsic nature of goal-driven systems’: any such system will be ‘highly motivated’ to discover ways of self-improvement that enable its goals to be achieved more effectively. Such drives (potentially catastrophic for humanity) would inevitably develop unless future AI systems were ‘carefully designed [by us] to prevent them from behaving in harmful ways’.

However, Omohundro’s argument begs the question at issue here. He assumes that (some) AI systems can be ‘highly motivated’, that they can care about their own preservation and about achieving their various goals. Indeed, he takes it for granted that they can have goals, in the same (caring) sense that we do.

But the discussion above, about the relation between needs and goals, reinforces the claim that computers – which can’t have needs (and whose material existence isn’t governed by the FEP) – can’t really have goals, either. Since striving (or actively seeking) is essential to the concept of need, and all our intentions are underpinned by our needs, human goals always involve some degree of caring. That’s why achieving them is inherently satisfying. A computer’s ‘goals’, by contrast, are empty of feeling. An AI planning program, no matter how nit-picking it might be, is literally care-less.

Similarly, even the most ‘friendly’ AI is, intrinsically, value-less. When AI teams talk of aligning their program’s ‘values’ with ours, they should not be taken as speaking literally. That’s good news, given the increasingly common fear that ‘The robots will take over!’ The truth is that they certainly won’t want to.

The harshest punishment that a prison governor can authorise is to put someone into solitary confinement. It’s harsh because it prevents satisfaction of all but the two lowest levels of Maslow’s hierarchy of needs. (Depriving prisoners of first-level sleep is so harsh as to be used as torture.)

Yet many old people, in effect, are in just that situation. In some care homes, residents with incipient dementia have as little as two minutes of social interaction per day. Their human carers bring their meals, but are too busy to talk about anything of interest to them.

Recent research, however, has shown that the lonely residents’ distress can be significantly lessened by having individually tailored, personal interaction for only one hour per week. In this study involving 69 care homes in the United Kingdom, a specially trained caregiver talked to the residents about their needs and interests, and then joined them in structured activities inspired by each resident’s idiosyncratic tastes.

The effect on behaviour was dramatic. People with dementia who had previously shown high levels of agitation, aggression and wandering became much calmer. The standard measures of quality of life showed a marked improvement. In addition, neuropsychiatric symptoms such as delusions, anxiety, apathy and disinhibition decreased.

This approach is pure Maslow. It requires caring people to deliver love, belonging and respect – and perhaps to satisfy curiosity, too. It also requires personal sensitivity to identify patients’ interests, and to plan, and then share in, specific activities. It couldn’t be done by computers.

That’s not to say that no AI-based approach can help people with dementia. For instance, consider PARO – a robot designed in Japan by Takanori Shibata and introduced into care homes and hospitals worldwide following its launch in 1998. PARO looks like a baby harp seal, with silky fur, limpid black eyes and luxuriant eyelashes. It can move its head, eyelids, neck, flippers and tail. It has sensors for light, touch, sound, temperature and posture, which enable it to make eye contact when you speak to it, and to respond and/or learn if you stroke it or hit it. It responds to a few simple words and phrases, including its own name. When it vocalises, its squeaks are constructed from snippets of digitally recorded baby-seal cries, and seem to have some (limited) emotional range.

It’s beyond doubt that PARO can significantly help people with mild to moderate dementia. In general, it reduces stress, aggression, anxiety, depression and wandering. (Live ‘therapy dogs’ might be even better, but they are less predictable and need a lot of attention from the busy staff.)

Most people can come up to scratch much of the time. AI systems, by contrast, simply cannot

US astronauts are hoping that this robot will reduce stress, anxiety and depression in them, too: PARO is currently being considered for use on manned journeys to Mars. The astronauts, of course, do not have dementia – and nor do many people in care homes. It might seem that, for them, the relatively advanced language skills of commercial chat-bots such as Apple’s Siri or Amazon’s Alexa would be more appropriate than seal-squeaks.

AI scientists working in this area typically employ such language skills; they commonly refer to ‘computer carers’, ‘computer companions’ and ‘computer confidants’, and claim that their machines will soon be able to have emotionally appropriate ‘conversations’ with the user. Many of those comforting conversations, it’s claimed, would be focused on the reminiscences of the elderly person. The (perhaps boringly repetitive) memories could even be automatically recorded in a ‘journal’ to be given to the user’s relatives.

These predictions are worrying in a number of ways. Let’s assume that future advances in natural-language processing will allow coherent personal ‘journals’ to be automatically constructed in the coming decades (which is doubtful, as I argue in AI: Its Nature and Future (2016)). Narratives of much less challenging kinds are already being generated, and their overall tone customised, for use in various kinds of journalism or management reports. So perhaps AI-composed ‘journals’ will appear, eventually. But they will lack care-based subtleties. They will also compromise privacy: would you want your own uncensored reminiscences to be shared with your family members?

Other human beings, of course, don’t always provide understanding and compassion either. People often deliberately avoid their bereaved friends, because they don’t know what to say to them. And many individuals are woefully deficient in tact. But most can come up to scratch much of the time. AI systems, by contrast, simply cannot.

That’s why many of us (myself included) feel that extensive use of AI ‘carers’, even though they could admittedly lessen boredom, would be highly unfortunate. Such overuse would be a lost opportunity to provide genuine love and belonging. Moreover, this dehumanising loss might spread into other social contexts, both decreasing and coarsening our everyday interactions with each other.

In a nutshell, over-reliance on computer ‘carers’, none of which can really care, would be a betrayal of the user’s human dignity – a fourth-level need in Maslow’s hierarchy. In the early days of AI, the computer scientist Joseph Weizenbaum made himself very unpopular with his MIT colleagues by saying as much. ‘To substitute a computer system for a human function that involves interpersonal respect, understanding, and love,’ he insisted in 1976, is ‘simply obscene.’

I agree with him. Perhaps you do, too. A study in 2017 for the Royal Society reported that the British public, while welcoming many applications of AI, are distinctly wary of its being used for social care. In the UK, even digital natives – 18- to 29-year-olds – express fears about the downgrading of human-to-human interaction as a result of robot carers, including robot nannies.

These worries are not widely shared in Japan. There, attitudes towards robots/chat-bots are very different indeed from those in the West, as the anthropologist Jennifer Robertson shows in her book Robo Sapiens Japanicus (2017). The Japanese, whose population is ageing fast, are being officially encouraged (by state-funded visual and textual propaganda) to rely on robot care-helpers for the elderly. Robots are preferred as carers over human immigrants, or even foreigners with permanent residency.

The ‘Ethical Guidelines’ of the Japanese Society for Artificial Intelligence envisages a robot as ‘a member or a quasi-member of society’. As such, robots may even be thought to deserve rights. An early instance of PARO was in effect given citizenship in 2010 (technically, the status is known as special family registry). Special residency permits – which are granted to non-citizen humans only very rarely – were authorised for nine individual robots between 2003 and 2013. And a chat-bot called Mirai (meaning future) was granted official residence in Tokyo in October 2017.

Robots are often regarded in Japan as family members, in a culture where to be officially declared a family member is critically important. The long-lasting family group (or ie) is constituted and conceived very differently from the family group in the West. Consequently, it satisfies the need for love and belonging in significantly dissimilar ways. Indeed, it follows that any AI application focused on ‘the family’ would not be equally useful in both nations.

Perhaps even stranger to Western sensibilities, Buddhist temples in Japan are currently recruiting new followers by offering funeral services for robots. Then again, US soldiers in Iraq held funerals for several robotic IED-detectors that had saved many lives, and even awarded a Purple Heart and a Bronze Star medal to one of them. However, there was opposition: some of their compatriot soldiers were reported as being ‘upset’ by these robot-focused rituals.

Robots are reinforcing Japan’s traditional (and patriarchal) family values and division of labour

The key reason for the greater willingness in Japan to acknowledge robots as members of society lies in the Shinto cultural tradition, which does not make a stark distinction between the animate and inanimate worlds. (Funerals are held for other material objects, too.) If robots aren’t exactly ‘living things’, they aren’t discontinuous with life either. Like everything else, they lie in a continuum or network of existence, which allows for affinities across categories that can seem inconceivable to some Western minds.

Japanese roboticists – and politicians – draw on this tradition in seeing robots as interchangeable with humans in everyday life, and as (mutually) enhancing our being. The dread that ‘The robots will take over!’ is virtually absent in Japan.

Japan’s prime minister Shinzō Abe has recommended a robotics-based revolution in terms that go far beyond economic or instrumental considerations, and that would not resonate well in the West. Besides generously funding research enabling robots to do nursing and elderly care, Abe foresees a national lifestyle in which robots provide much of everyone’s care and companionship. Far from fearing robots as a threat to the family group, he sees them as reinforcing Japan’s traditional (and patriarchal) family values and division of labour.

Not all of his compatriots are persuaded. For example, many feminists – in a society where married women are virtually forced to stay at home – see his declared aim of relieving women’s daily drudgery as a cynical ploy to reverse the plummeting Japanese birthrate. Nevertheless, the vision of robots inhabiting everyday domestic settings is more acceptable in Japan than in the West. (Presumably, then, my earlier scepticism about future humans ‘working happily alongside’ AI systems might be less appropriate there.)

The users and designers of AI systems – and of a future society in which AI is rampant – should remember the fundamental difference between human and artificial intelligence: one cares, the other does not.

In particular, they should remember that Homo sapiens is an intensely social species. Our needs for what Maslow called ‘love and belonging’ (which includes collaboration and conversation) and ‘esteem’ (which includes respect and dignity) are not mere trivialities, or optional extras. They matter. They must be satisfied if we are to thrive. Their degree of satisfaction will influence the individual’s subjective experience of happiness (and others’ measurements of it).

Computers have no such needs, so computer scientists and engineers can too easily forget about them. Indeed, the Institute of Electrical and Electronics Engineers’ recommendations regarding AI and ‘human wellbeing’ hardly mention psychological issues. They focus instead on human rights, privacy and changes in employment. Human ‘flourishing’ is interpreted as consistency with abstract ethical principles.

The psychological roots of our wellbeing are sometimes sidelined even when, initially, it looks as though they haven’t been forgotten. For instance, a recent report produced by the British Academy and the Royal Society identifies its ‘overarching principle’ as ‘the promotion of human flourishing’, and even stresses the importance of ‘core human purposes or goals’ in defining human welfare. However, instead of discussing the sorts of goals highlighted by Maslow, it turns to the approaches favoured by development economists, based on the ‘capabilities’ defined by the economist Amartya Sen and the philosopher Martha Nussbaum. Sen’s and Nussbaum’s capabilities – such as literacy, freedom to travel or vote, or environmental sustainability – concern societal factors, rather than individual, or even interpersonal, ones. Such societal factors are indeed important, and more easily measured than psychological aspects. But personal psychology is important, too.

There will always be countless possibilities for intrinsically satisfying person-to-person activities – as long as the future society is ordered in such a way as to make room for them. Care of the elderly, or of children, carried out by properly trained, paid and respected human minders could work wonders. The potential for engaging with one’s friends and neighbours is virtually limitless. Which examples of interpersonal engagement count as a ‘job’, and whether some, such as voluntary or community work, merit financial support from the state, will be for society to decide.

AI professionals, psychologists and philosophers should take such points on board. And the politicians should do so, too. These things matter.

This essay is adapted from the inaugural Margaret Boden Lecture, delivered by Professor Boden herself, at the University of Cambridge in June 2018.