Tuesday, March 12, 2013

Turing's Apple

The final chapter of my book Turing's Tango (in Dutch) can now be read in English:



Chapter 8. Turing’s Apple

“Everybody’s so worried about computers becoming really, really smart and taking over the world, whereas in reality what’s happened right now is computers are really, really dumb and they’ve taken over the world. It’s true, right? We cannot live a day without them; the world cannot function without computers anymore. Yet computers are really stupid, and they’ve taken over the world. It would be better if they were smarter.” − Pedro Domingos

“What if the cost of machines that think is people who don’t?” − George Dyson

“Our mental life has its source in our practical life... Meaning exists within a world that we have learned to be part of and that we have made our own.” − Stephen Toulmin

“I’m done with smart machines. I want a machine that’s attentive to my needs. Where are the sensitive machines?” − Tweet from Tom Igoe (@tigoe) on 1 July 2009





In my TEDx talk I tell the story of Turing's Tango in a nutshell

When the two gloved men stepped into the study, their eyes immediately fell on a large table calculator. Of all the homes they had searched in their careers, this was the first in which they had found such a device. One of the men accidentally bumped against the table. From a messy pile of books and papers a few sheets fluttered to the floor. The other man hurried to help his colleague, bent down, raised his eyebrows when he noticed that the papers contained nothing more than mathematical formulas, and picked them up one by one. He laid them neatly back on the stack, more tidily than the occupant of the house would have done himself. The latter stood uncomfortably in the doorway of his study, watching as the two detectives began to take fingerprints.

The date was Sunday, February 3, 1952. About a week before, somebody had broken into Alan Turing’s house in Wilmslow, a small town just south of Manchester. The founding father of the computer and of artificial intelligence had been living there since the summer of 1950. The burglar’s loot had been modest: a pair of trousers, a shirt, a pocket knife, a compass, some fishing knives, razors and shoes. Turing estimated its value at fifty pounds. Finally, the burglar had also opened a bottle of sherry and poured himself a glass.

Only when Alan Turing went to the police to report the burglary did it occur to him that he might know the burglar. A few weeks earlier he had picked up nineteen-year-old Arnold Murray on the street in Manchester. They went for lunch together and Turing invited the boy to come to his home the next weekend. Murray accepted, but did not show up. The following week, the two met again by chance in the street, and this time Turing invited the boy to his home immediately. Murray accepted.

Arnold Murray still lived at home with his poor parents; he was unemployed and struggling with himself. Alan told him about his work on the ‘electronic brain’. The boy was fascinated. Although they had few topics of conversation in common, they gave each other the attention they were both craving. A few weeks later they slept together for the first time. Arnold, who was always short of money, was the first suspect to come to Turing’s mind.

Arnold was angry when he learned that Alan suspected him, and he threatened to report their affair to the police. At that time homosexual acts were illegal in England. A week after the burglary the two talked again. Arnold said he probably knew who had committed the burglary: he had told his friend Harry, a twenty-year-old who was also unemployed, about Alan, and Harry had proposed that they burgle Alan’s house together. Arnold had refused. It seemed obvious that Harry had carried out the plan by himself.

Once Alan and Arnold had settled their quarrel, they spent the night together again. The next day, Alan went to the police for the second time, to report that he had reason to suspect Harry of the burglary. He made up a story about how he had gathered the information, without naming Arnold as the informant. But the brilliant man who had invented the Turing Test as an imitation game, a game in which the computer is expected to behave as humanly as possible, proved incapable of playing an imitation game successfully himself. Breaking the German Enigma code had proven much easier for Turing than breaking social codes. Did he know how much trouble he could have saved himself by telling a convincing little lie?

The detectives soon sensed that Alan was hiding something. How had he obtained the information about Harry? “We know all about it,” they told him, without revealing what it was they knew all about. Was it the burglary? Or did they know about his relationship with Arnold Murray?

It did not take long before Alan told them exactly how he had obtained the information about Harry, and about his relationship with Arnold Murray. The detectives were amazed at how openly and shamelessly Alan spoke about his homosexual activities. The fingerprints indeed revealed Harry as the burglar, but they also identified Arnold as a visitor. Harry was arrested and confessed to the burglary. However, Alan Turing and Arnold Murray were also arrested, as their homosexual activities violated the law. Solving the burglary of Turing’s house had accidentally also uncovered a ‘crime’ the police would never have known about if Turing had kept his mouth shut.

After his arrest Turing wrote in a letter to a friend: “I’m rather afraid that the following syllogism may be used by some in the future:

Turing believes machines think.
Turing lies with men.
Therefore machines do not think.”


By the time he turned forty, Alan Turing had had enough of hiding his homosexuality, despite the legal prohibition. However, he had greatly underestimated the consequences of his honesty. On 31 March 1952, Turing was found guilty of something he no longer felt guilty about. He had to choose between imprisonment and treatment with female hormones to suppress his homosexuality. He chose the hormone treatment. He chose the mind over the body.

This chemical ‘solution’ to a sociobiological ‘problem’ had only just been introduced. It exemplified the optimistic postwar belief that science and technology would keep propelling society forward, that they could fix every problem. In 1949 F. L. Golla, the director of a neurological institute in Bristol, had conducted an experiment on thirteen men and discovered that with enough female hormones he could tame the male libido within a month. This shaky scientific foundation opened the way for the chemical castration of gay men.

The treatment was approved, and for a full year Alan Turing underwent compulsory treatment with female hormones. It has been estimated that about one hundred thousand British men suffered the same fate. The man who considered the human mind a machine was now himself treated as if he were a machine: a biochemical machine that had to be reprogrammed. He described a side effect of the treatment to several people: “I’m starting to grow breasts.” Although social attitudes toward homosexuality would slowly change, and it was soon recognized that ‘curing’ homosexuality with hormones was a delusion, all this came too late for Alan Turing.

Despite the effects of the hormone treatment on his body and mind, Turing went on with his work. His scientific interest had shifted by now. He was no longer focused on logic and pure mathematics, as he had been before the war, nor on cryptography, as he had been during the war. And the electronic brain already occupied him less than it had in the first years after the war. By the time the world’s first electronic computers were being built, the first embodiments of the Turing machine he had conceived in 1936, Turing had moved on to one of the central questions of biology: how does an organism know how to grow? How does a fertilized egg know where to grow the arms, where the bones, where the head?

This biological conundrum was brought to his attention by his interest in the functioning of the brain. An answer to the question of how the brain works was still too far away, that much was clear to Turing. But how a brain grows and learns from its environment, and how cells know how to grow, seemed manageable to him. Yet again, Alan Turing had hit upon a scientific question about which virtually nothing was known, and into which he could gain insight with extremely simple models. He suspected that the answer might lie in the behavior of the chemical soup that fills every cell.

As a mathematical model he chose the Hydra, an elongated freshwater polyp with tentacles at one end. He knew this little animal from Edwin Brewster’s book Natural Wonders Every Child Should Know, which had triggered his scientific interest as a teenager. In the book Brewster described how the polyp grows a new head and tail when it is cut in half.

Turing first simplified the polyp by looking only at a cross-section of its head, and then reduced that cross-section still further to a single ring of cells. He proposed a mathematical model of the reaction and diffusion of two chemicals along that ring. To solve the model, Turing used the Mark 1 computer that Manchester University had just acquired. In his hands, this brand-new electronic brain became a tool for unraveling one of the secrets of life. Again, Turing was a few decades ahead of his time.

His calculations showed that bulges could begin to form along the ring, bulges that could become the beginnings of the polyp’s tentacles. At the end of 1951, Alan Turing reported his findings in a scientific publication, one of the first to combine mathematics with biology and chemistry. He himself regarded it as just as important as his publication on the Turing machine. Just as in 1936, he had opened up a whole new field of research, this time mathematical biology, and in particular the study of non-linear dynamics: processes that are much more difficult to calculate than most of what scientists had studied until then.
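
For readers who want to see the mechanism at work, here is a minimal sketch in Python. It does not reproduce Turing’s exact 1952 equations; as a stand-in it uses the standard Schnakenberg reaction-diffusion kinetics, and all parameter values are illustrative assumptions. Two chemicals react and diffuse along a ring of cells, and because the second chemical diffuses much faster than the first, the uniform state breaks up into regularly spaced ‘bulges’:

```python
import numpy as np

# A minimal sketch, not Turing's exact 1952 equations: Schnakenberg
# reaction-diffusion kinetics for two chemicals u and v on a ring of
# cells with periodic boundaries. All parameter values are illustrative.
rng = np.random.default_rng(0)

N = 100                       # number of cells on the ring
a, b = 0.1, 0.9               # reaction parameters (assumed)
Du, Dv = 1.0, 40.0            # v must diffuse much faster than u
dt, steps = 0.005, 40_000     # explicit Euler time stepping

# Start at the uniform steady state plus a tiny random perturbation.
u = (a + b) + 0.01 * rng.standard_normal(N)
v = b / (a + b) ** 2 + 0.01 * rng.standard_normal(N)

def laplacian(x):
    """Discrete Laplacian on a ring (periodic boundaries, spacing 1)."""
    return np.roll(x, 1) + np.roll(x, -1) - 2.0 * x

for _ in range(steps):
    uv2 = u * u * v
    u += dt * (Du * laplacian(u) + (a - u + uv2))  # production/decay of u
    v += dt * (Dv * laplacian(v) + (b - uv2))      # consumption of v

# Count the 'bulges': local maxima of u around the ring.
peaks = int(np.sum((u > np.roll(u, 1)) & (u > np.roll(u, -1))))
print(f"stationary pattern with {peaks} bulges along the ring")
```

Making the two chemicals diffuse equally fast destroys the pattern, which is the essence of Turing’s discovery: the bulges come from the chemistry itself, not from any blueprint.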

The formulas that the detectives found on the sheets of paper in Turing’s study in early 1952 were these symbols of mathematical biology. The symbols that occupied Alan Turing during his life had taken on an increasingly concrete meaning: from zeros and ones for the solution of a purely mathematical problem, to physical and chemical symbols that could be measured experimentally in living cells. His scientific interest developed in parallel with his personal life. The man who, in both the Turing machine and the Turing Test, had decoupled intelligence from the body and its social environment, had become more and more aware of the role of his own body. But just as he was discovering himself more and more, the state had intervened.

Besides the compulsory hormone treatment, he lost his Certificate of Conduct − which he needed to work as a cryptanalyst − and as a result he was no longer allowed to enter the United States. Homosexuality was not only considered immoral and illegal; at that time it was also seen as a major security risk. A homosexual was vulnerable to blackmail, and a homosexual who had done secret work during the war was dangerously vulnerable to blackmail. Never before had Alan Turing felt the pressure of social control so strongly.

Turing began to feel depressed and complained that he could no longer concentrate well. In October 1952 he sought the help of the psychoanalyst Franz Greenbaum. At Greenbaum’s request, Turing began to write down his dreams, filling three notebooks. Greenbaum tried to make sense of Turing’s troubled feelings and thoughts.

Turing befriended the Greenbaum family, and in May 1954 they went together to Blackpool, a town on the English west coast. Strolling past the attractions on the seafront, Turing decided to visit a fortune-teller. He had done so once before, at the age of ten, when a fortune-teller had predicted that he would become a genius. The Greenbaum family waited half an hour, and when Turing finally came out, he looked pale. On the bus back to Manchester he would not say a word.

The Greenbaum family would never see Turing again. On 7 June 1954, a few weeks after the visit to Blackpool, Alan Turing took a few bites of an apple that he himself had laced with cyanide. He went to bed and died without any visible struggle. His housekeeper found him the next day. Although the apple was never tested, suicide was established as the official cause of death on the basis of the cyanide poisoning and the apparent absence of a struggle. The founding father of the computer and artificial intelligence died just a few weeks short of his 42nd birthday.

Perhaps it was no coincidence that Turing chose a poisoned apple. In October 1938, when he was 26 years old, he had seen the premiere of the film Snow White and the Seven Dwarfs in Cambridge. Afterwards he kept repeating aloud the lines of one particular scene, the scene in which the wicked witch dips the apple in poison: “Dip the apple in the brew − Let the sleeping death seep through.” Later he told a friend about his fascination with this scene.

Turing’s death came two years after his conviction and more than a year after the end of his hormone therapy. Nobody had any idea that he was contemplating suicide, nor did he leave behind any motive. Typical Turing: not a word wasted. Yet it is likely that he refused to let the state interfere with his identity. His aversion to authority had always been great. As Turing’s biographer Andrew Hodges writes in Alan Turing − The Enigma: “Only in his death did he finally behave truly as he had begun: the supreme individualist, shaking off society and acting so as to minimize its interference.”

In this book, Turing’s Test − the question: when can we say that machines think? − has brought us via Turing’s Tango − the question: what is the best collaboration between man and machine? − to Turing’s Apple. Turing determined his own fate with the aid of an apple, not accepting what authority imposed on him. Turing’s Apple therefore stands for the question: who do we want to be in a technological world that is increasingly determined by artificial intelligence systems? What do we want from the many opportunities that artificial intelligence systems offer us in the future? Do we want what is possible (or what we think is possible)?

In a radio interview with the BBC in 1951, Turing said that in his view there is no human characteristic that a machine could not imitate. He had in mind the imitation of intellectual faculties, not of physical appearance: “... I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics, such as the shape of the human body. It appears to me quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. Attempts to produce a thinking machine seem to me to be in a different category.”

Over half a century later, a woman sits with a baby robot seal on her lap in an American nursing home. The little animal has soft white fur. The woman is depressed. She is 72 and has a son who recently broke off contact with her. She never gets other visitors. She strokes the robot seal and says: “You’re sad, aren’t you? Life is hard. Yes, it’s hard.” The robotic baby seal turns its head towards her, blinks its eyes and makes a cooing sound. The woman is touched.

The robotic baby seal is called Paro. In recent years several thousand of these robots have been sold, at six thousand U.S. dollars each. Paro is an invention of the Japanese researcher Takanori Shibata. The Japanese population is one of the most rapidly ageing in the world, which drove Shibata to search for ways to use therapeutic robots for the elderly. First he experimented with a robotic cat, but most people didn’t like it, because they unconsciously compared the robotic cat to a real one. The robot seal did much better, precisely because most people have no experience with seals.

Paro reacts to motion and is sensitive to touch. Because its reactions depend on how it is treated, it gives the impression of having a personality of its own and of being capable of elementary physical contact. The robotic seal understands about five hundred English words (and even more in Japanese). Reactions to Paro vary greatly: some people consider it an appealing source of entertainment, others find it just a boring, inanimate thing.

Paro is only one of the robotic animals being developed for the elderly. Sony’s robotic dog Aibo and Philips’ robotic iCat are two others. iCat has a camera built into its nose; it can listen, talk, and switch devices on and off. The Dutch TV documentary RoboLove (2006) showed an experiment with iCat in a nursing home. An old lady in a wheelchair is brought to a table on which an iCat called Fibi stands. The woman is asked to stroke Fibi’s head gently to wake the robot up. The woman strokes, Fibi raises her head, opens her eyes and starts talking:

Fibi: “Hi, I’m Fibi. What is your name?”

Woman: “My name?... She talks so unclearly.” [she says in the direction of the supervisor of the experiment]... “That’s because of all the talking in the background...”

Fibi: “What can I do for you?”

Woman: “You have to talk a little louder to me, you know...”

Fibi: “I do not understand what you mean.”

Woman (raising her voice): “Don’t you understand what I mean?”

Fibi (with the same monotonous computer voice): “I do not understand what you mean.”

Woman (looking in surprise at the supervisor): “What is she saying?”

Supervisor: “She does not understand what you mean.”

Woman (raising her voice even more): “Talk a little louder!”

Even this simple conversation goes completely wrong. Not only does Fibi not understand the woman, she also makes no social-emotional contact: she does not pick up on the lady’s body language or tone of voice, both of which signal that the woman cannot hear Fibi properly.

Robotic animals like iCat and Paro raise the question of whether this is what we want in health care. Are we satisfied with providing the illusion of contact, understanding, attention and affection? By far the most important need these robot animals are intended to meet is attention. And that is exactly what machines lack the capacity for, because they lack social-emotional intelligence, consciousness, a life history. They are not born and do not die; they feel no pain and will never share in the human experience. At most they can pretend, for a moment, to pay attention. Paro gives the illusion of attention in the same way that the computer program Eliza created, back in 1966, the illusion of talking with a psychotherapist. And the elderly, children and the mentally or physically disabled are the most susceptible to these illusions. As one elderly Dutch participant said about her experiences with the iCat: “Most people would prefer a dog or a canary to have some company and to talk to.”
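
How little machinery such an illusion requires can be seen in the toy sketch below. This is not Weizenbaum’s original 1966 program, only an assumed, minimal reconstruction of the Eliza trick: a few pattern-and-reply rules with pronouns flipped, and nothing in it that could be called attention:

```python
import re

# A toy sketch of the Eliza trick (not Weizenbaum's 1966 program):
# a handful of pattern -> canned-reply rules, with pronouns reflected.
# The apparent 'attention' is nothing more than string substitution.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*\b(sad|hard|alone)\b.*", "Life can be {0}. Tell me more."),
]

def reflect(text):
    """Flip first-person words to second person ('me' -> 'you')."""
    return " ".join(REFLECT.get(w, w) for w in text.split())

def eliza(utterance):
    """Return the canned reply of the first rule that matches."""
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."

print(eliza("I feel that nobody visits me"))  # Why do you feel that nobody visits you?
print(eliza("Life is hard"))                  # Life can be hard. Tell me more.
```

A handful of such rules already produces replies that feel attentive; what it can never produce is someone who actually hears you.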

The American sociologist and clinical psychologist Sherry Turkle, a professor at MIT, has for decades been examining how people interact with computers, the internet, smartphones, robots and other devices with embedded intelligence. She has interviewed hundreds of children and adults. About the robotic animals Turkle says: “This is not about building devices packed with smart features. This is not about what they have, but about the feeling they give people... But what can something that itself has no life cycle know about your death or your pain?”

After years of study Turkle concludes that, given the choice between playing with a robot animal and talking to a flesh-and-blood human, most older people choose the latter. The best part of the research sessions with the robots, for them, is the attention of the researchers who ask about their experiences with the robot animals. They feel needed again, and at the center of attention. People need people.

The British robotics expert Noel Sharkey is skeptical about this kind of care robot: “My big concern is that once the robots are tried and tested, it is tempting to leave us completely in the hands of robot care. Like all people, older people have a need for love and human contact and these only come from visiting caregivers. For me a robot buddy wouldn’t fulfill this requirement.” Care robots are primarily about cutting costs; they are not an improvement on human care. Of course, robots can be useful for mechanical tasks, like lifting people out of bed, just as robots have been useful for decades in heavy and repetitive mechanical work.

Turkle has studied not only how humans interact with robotic animals, but also the impact of new technologies such as e-mail, the internet, mobile phones and Facebook. She concludes that these new technologies, like the ‘social’ robot animals, have made convenience and control a priority at the expense of the rich spectrum of human behavior. They offer the easy illusion of companionship without any emotional risks, but in practice they only make many people lonelier, says Turkle: ‘Alone Together’, as she calls it. “Communities are grounded in physical proximity, shared concerns, real consequences and shared responsibilities.” The online world falls utterly short in this respect. While we expect more and more from technology, we expect less and less from each other, according to Turkle.

Human interaction is always full of complications, pitfalls, surprises and disappointments, but also full of unexpected joy and happiness. That unpredictability gives human communication its richness. A device can be turned off whenever you like; not so a human being. Ultimately, we live in a physical world dominated by humans, and so we must learn to deal with all the complications that come with it.

Suppose that more and more children play with robotic animals instead of with real animals, and suppose that we place caregiving more and more in the hands of robots. What would be the consequences?

The American psychologist Peter Kahn examines the consequences of increasingly replacing ‘true nature’ − understood in a broad sense − with what he calls ‘technological nature’: technology such as television, online games, digital projections and robotic animals that replaces our experience of real nature. Kahn was inspired by an experiment by his compatriot Roger Ulrich, who found that the recovery of hospital patients depends on whether or not they have a window with a view. Patients whose window looked out on a piece of nature took fewer painkillers and left the hospital earlier than patients who looked at a blank wall.

This result made Kahn wonder: what if he replaced the window and its real view with a large plasma screen showing the same piece of nature in real time? He set up an experiment with three groups of thirty people. One group sat in a room overlooking real nature. The second group looked at exactly the same piece of nature, recorded by a camera and projected on the big plasma screen. The third group faced a blank wall. All participants were given tasks while their physiological responses were measured. Kahn and his colleagues wanted to know how fast the subjects recovered from the stress of the tasks.

It turned out that the subjects in the room with the real view recovered the fastest from the stress, but there was no difference in recovery rate between subjects who had no view and those who had a virtual view. The virtual view not only had less effect than the real view, it did not even have a better effect than no view.

Next, the researchers hung the same kind of screens for six weeks in offices where people normally had no view at all. After those six weeks, the employees reported that they were happy with their virtual views; compared with no view, they experienced technological nature as an improvement. “Technological nature is probably better than no nature,” Kahn concluded from his experiments, “but not as good as real nature... As a species we adapt to the loss of real nature. We have no choice. Either we adapt, or we die out. But because of biophilia − our evolutionary connection with nature − we will be psychologically worse off.” The famous Harvard biologist Edward Wilson coined the term biophilia in 1984 for the hypothesis that people have a fundamental, genetically determined need for contact with living nature.

Turkle’s and Kahn’s concerns were anticipated in a remarkably prescient story: The Machine Stops (1909) by the British writer Edward Morgan Forster. The Machine Stops takes place in a future world where people live underground. Their lives are controlled and monitored by ‘The Machine’. Physical contact no longer exists, neither with nature nor with people. Forster tells the story of a mother who believes in the salvation of The Machine and of her son who increasingly doubts that belief. Mother and son communicate only by videoconference.

The mother considers this method of communication adequate, but her son longs to talk to her in real life and says to his mother: “The Machine is much, but it is not everything. I see on the screen someone who looks like you, but I do not see you. I hear someone on the phone who sounds like you, but I do not hear you. That is why I want you to come and see me. Pay me a visit, so that we can meet face to face and talk about the dreams that occupy my mind.”

The son realizes how many nuances are lost in this virtual form of communication. The mother thinks it a waste of time to spend two days travelling to meet her son; in her view, all contact can be maintained perfectly well through The Machine. After their disagreement, she breaks off contact with her son, who in turn tries to escape the ubiquitous grip of The Machine.

Of course, intelligent machines have brought us many good things. Computers and the internet have deeply influenced our behavior in recent decades. We can make a phone call from any glacier, fjord or jungle; we use the cell phone as a film camera, a navigator, a mega-encyclopedia and a constant source of news; we make instant contact via email, Skype and social networks; we read newspapers and magazines on tablet computers; computers search mountains of data far too large for any human to search alone; and our whole life story is increasingly stored and told on and via our computers.

Yet intelligent machines also have a downside, the one illustrated by Forster’s story: alienation and dehumanization. In the past two decades, machines have little by little taken over human tasks in which human contact used to play a role. Nowadays we withdraw money from the ATM, check in for public transport or flights at automatic machines, and when we call a company with a question, we often first get a machine on the line: ‘To help you better... Press 1 for... Press 2 for...’ Gone is the eye contact, the smile, the social game, the humor. Gone is the human contact, the flexibility in service.

The human dimension, in which contact and attention are central, is being replaced by the machine dimension, in which efficient production is central and which is driven primarily by economic motives. The human brain is adapted first and foremost to making contact with people, not to operating inflexible, non-social, non-speaking machines.

The ideal, logical world of the machine is unchangeable and therefore lifeless. The real world is constantly changing, is full of chaos and decay, and therefore alive. “For the space and he who travels freely through it, they are the only true things, that’s life, and everything which is like a stone in one place and slowly turns to dust, is dead from the very beginning”, wrote the Dutch writer and poet Jan Jacob Slauerhoff.

The belief that more technology makes us happier is only half true. Where technology contributes substantially to our health and meets our basic needs for food, housing and clothing, more technology does make us happier, as research has shown. The Netherlands scores higher on the technology index than India, and India higher than Tanzania; their happiness scores follow the same order. But Sweden scores higher than the Netherlands on the technology index, while their happiness scores are equal.

Once technology no longer contributes significantly to the fulfillment of our basic needs, a saturation level is reached. In most Western countries that level was reached in the sixties or seventies; since then, the happiness level has hardly changed. Factors other than technology then play the more important role in our happiness: health, social relationships, personal fulfillment and finding meaning in life. The human dimension matters more than the machine dimension.

Finally, let us go back one last time to the Turing Test. When it comes to answering the question of whether computers will ever exhibit an artificial humanlike intelligence, and thus pass the Turing Test, we can provide an answer by summarizing part of what we have encountered in this book in a three-step argument. This argument answers the ultimate question: will computers become smarter than people?

The first step of this argument rests on the fact that both language and thinking are part of a common life. They are social activities. When the common life changes, language and thinking change with it. That language is part of a common life can be seen, for example, in the use of quotations from popular films, TV programs, books or songs in everyday language. These quotes are called catchphrases. Examples are ‘goeiesmorgens’ (from the Dutch TV program Jiskefet), ‘jemig de pemig’ (from the Dutch TV program Van Kooten en De Bie) and ‘a thousand bombs and grenades’ (Captain Haddock in Tintin). Someone who uses these catchphrases does not mean to refer to the literal meaning of the quote, but is communicating a hidden social signal: ‘we have the same humor’, ‘we have the same taste’ or ‘we understand each other’. Slang is another example of how language is rooted in a shared life: to understand slang, one has to share in the particular (street) life it belongs to.

Language influences thinking, but non-linguistic thinking, too, is shaped by one’s environment. Because I grew up in an environment where the bicycle is an important means of transport, I am aware of the existence of bicycle paths and of the role of cyclists in traffic. Foreigners who grew up without bicycles and walk around in a bicycle city like Amsterdam treat bicycle paths as footpaths, fail to recognize the sound of a ringing bicycle bell, and are unaware of the role of cyclists in traffic. They are led by an automatic way of thinking that is not used to cyclists.

There are also many examples showing that thinking is culturally rooted. We now think differently about slavery, mental illness and the death penalty, for example, than people did centuries ago. And in modern societies, people are on average more open to euthanasia and abortion than in traditional societies.

Hence we can formulate the first step of the three-step argument about artificial humanlike intelligence as follows:

1. A computer is only able to talk and think like a human if it can participate in the same life as humans.

The second step of the three-step argument is based on the fact that the computer is a ‘symbol-manipulating’ machine that is isolated from its environment. It is fully dependent on humans, both for its input and for the interpretation of its output. The second step can therefore be formulated as follows:

2. A computer cannot participate in the same life as humans, because it cannot move around in the world and because its cognitive abilities are not rooted in a body and not anchored in nature and in a social group.

From these two premises we can conclude:

3. A computer can never think and talk like humans.

If the computer cannot talk and think like humans, it cannot surpass human intelligence. Everything encountered in this book so far therefore points to the conclusion that human beings remain the computer’s master. Computers will not become our sparring partners, and they will never write a novel or a movie script. And because they do not participate in our common life, we should not want computers as politicians, judges, therapists, mediators or educators, all of them often-heard pipe dreams of artificial intelligence. If the computer ever passes the Turing Test, it will be through tricks, not because it shares a common life with humans. That path to success would be totally uninteresting for humans and practically irrelevant. Another reason why the Turing Test is obsolete.

Of course, one can argue that in theory a robot could enter our world and share in our common life; after all, a robot is a computer equipped with senses and with a body that can act. However, six decades of research in artificial intelligence have not opened any promising route to achieving that. There is no reason to suppose that one evening a robot will enter a pub, start flirting, seduce a dance partner and wake up the next morning with a hangover, wondering where the partner has gone. Even if the robot had a super brain like that of Deep Blue or Watson, it would not impress potential partners with the sentence ‘pawn e2-e4’, or by waiting for someone to ask it a Jeopardy-style quiz question. In theory a robot can explore the world and share in our common life, in the same way that in theory there can be world peace tomorrow.

As long as machines are light years behind humans in learning and general intelligence, the goal of an artificial humanlike intelligence lies far beyond the horizon. As long as that goal remains out of sight, we had better focus on practical artificial intelligence: machines that complement people, specialized machines that do things people cannot do, or do much less well. And so we return to the Turing Tango: the optimal cooperation between human and artificial intelligence.

In chapter 5 we made a list of what machines are good at and people are bad at, and vice versa. A combination of human and machine intelligence should provide the bedrock for the Turing Tango. I think five essential observations should guide the design of a Turing Tango:

1. Humans are at heart physical and social beings. Therefore, human intelligence is not only determined by the logical thinking on which Turing concentrated, but primarily by social and emotional intelligence embedded in the body.

2. Humans are genetically connected, through evolution, with the nature around them. The experience of real nature appears to be better for our physical and mental health than technological or simulated nature.

3. Technology is part of being human. Throughout their evolutionary history humans have always been toolmakers, so humans are technological beings. It would therefore be nonsense to reject technology a priori as something unnatural.

We just need to realize that technology is not merely a tool or an extension of human beings. With the invention of each new technology, whether the wheel, the clock or the computer, humans changed too: suddenly they could carry heavier loads, plan appointments accurately, and outsource calculations and other information-processing tasks. Again and again, we must evaluate how new technology can best serve us, and what its possible negative consequences are.

4. Artificial intelligence is increasingly pervading everyday technology.

5. Artificial intelligence is different from human intelligence.

If we keep these five observations in mind, artificial intelligence has a lot to offer us, not as machines that match us in social-emotional intelligence and creativity, but as machines that help us in various ways, for example to extract more knowledge from all available information, and as robots that do jobs that are too dull, dirty or dangerous for humans.

We should not have to adapt to the machine, as we so often must today; we should ensure that machines adapt better to us. Just as an ergonomically shaped hammer is tailor-made for our hands, so intelligent machines should fit our intelligence seamlessly. Robotic animals such as Paro and iCat are bad examples of a Turing Tango. A robot vehicle on Mars, a search engine like Google, car navigation and the autopilot in an aircraft are good examples of a Turing Tango.

Japan is at the forefront of robotics research. But when the Fukushima nuclear power plant exploded in March 2011 after a devastating tsunami, robots were nowhere to be seen during the first four weeks. People did the firefighting; people carried out repairs in dangerous places. Japan had to rely on people because it did not have robots flexible enough to salvage the power plant. The international press agency Reuters wrote: “Japan may build robots that play a violin, run a marathon and conduct a marriage ceremony, it was not able to use any of these machines to help repair its crippled reactors.”

Machines cannot match humans when it comes to giving personal attention or conducting a conversation, but in dangerous situations, such as the Fukushima disaster, robots can be a godsend. The violin-playing robot, the robot that conducts a marriage ceremony or runs a marathon, and the Japanese robot modeled exactly on the appearance of its human creator (Hiroshi Ishiguro) yet lacking even the intelligence of a two-year-old toddler, are useless instances of the Turing Tango. They are no more than toys for the stage, PR stunts for journalists and the public, mirages of artificial intelligence. A robot that could actually rescue people in a disaster area would be a good example of a Turing Tango.

Since the introduction of the Turing Test in 1950 we have gained ample experience with the thinking machine and its abilities. In brief, that experience teaches us that computers will never become more intelligent than humans, because they do not descend from the apes.

During the Singularity Summit 2011 in New York, I put that objection forward to Ray Kurzweil, organizer of the meeting and the greatest prophet of the Singularity, as we saw in the previous chapter. This was his answer: “Computers descend from people, who in turn descend from apes. So, that is part of one and the same evolutionary process. The biological evolution flows slowly into a technological evolution. And yes, computers are good at other things than people, but our computers are catching up on those areas where we are now better.”

Unfortunately, Turing did not live long enough to see how the computer developed, and in what ways it proved a success and a failure. Although he had already shown, in his 1936 article on the Turing machine, that there are problems computers fundamentally cannot solve, he believed that the brain is essentially a computer. Perhaps he thought this fundamental limitation irrelevant to the question of whether the brain can in essence be computed. But it could also be that the limitations of the computer make it fundamentally impossible to model the brain as a machine running a computational algorithm.

It remains speculation, but I think that if Turing had lived a few decades longer, he would have realized not only that the computer has fundamental computational limitations, but also that artificial intelligence can only mimic part of human intelligence, and that there are aspects of human intelligence that are fundamentally not computable. That is not to say they are supernatural or fail to conform to the laws of physics, but that in practice they cannot be captured in a program a computer can run.

One indication for this hypothesis is that in the last years of his life Turing’s work moved from the computer to the living brain and living cells: from logic to biology, from heavenly theory to earthly practice. If he had been able to continue his mathematical-biological work, and had had the opportunity to watch the computer in action, to see that this machine turned out to be very good at following rules exactly (which Turing of course knew) but poor at everything that in humans relies on socio-emotional intelligence (which Turing had not yet realized), then I think he would no longer have considered the brain a computer. He would no longer have considered it a machine at all, but an organ of an organism, for which the machine is a poor metaphor. I think he would soon have come to the realization that the Turing Test was outdated, superseded by the actual development of artificial intelligence.

Although the Turing Test still appeals to many people, it is time to say goodbye to the question of how to tell when machines can think. We had better aim for an optimal cooperation between human and artificial intelligence: a humanistic artificial intelligence in which human beings, not machines, play the pivotal role.

Goodbye Turing Test, welcome Turing Tango.

P.S.

In May 2011, Ramona 4.0, the virtual character from chapter 2, had a brain lift and was transformed into Ramona 4.1. Curious about the new Ramona’s life, I started chatting with her again:

Bennie: “Hi Ramona, I am back. Can you please tell me something about yourself?” 
Ramona: “Hi there, Bennie. I talked to a lot of people since yesterday.”

Bennie: “But can you tell me something about your life since you were Ramona 4.0?” 
Ramona: “Just keep talking to me as long as you can.”