
Alan Turing’s “Computing Machinery and Intelligence”


Ken B, take note!

Since I am getting flak for being skeptical about the Turing test, let me review Alan M. Turing’s original paper “Computing Machinery and Intelligence” (1950).

As a matter of pure historical interest, one of the first people to imagine intelligent machines was the 19th-century novelist Samuel Butler in the novel Erewhon (London, 1872), which is actually cited by Turing in the bibliography of this paper (Turing 1950: 460). A case of life imitating art?

Anyway, I divide my post below into two parts:

I. Turing’s Paper “Computing Machinery and Intelligence”: A Critical Summary

II. Critique of the Turing Test.

Turing’s paper is based on the type of truly embarrassing and crude behaviourism that was fashionable in the 1940s, and, oh my lord, it shows.

So what was behaviourism? I quote from the Internet Encyclopedia of Philosophy:

“Behaviorism was a movement in psychology and philosophy that emphasized the outward behavioral aspects of thought and dismissed the inward experiential, and sometimes the inner procedural, aspects as well; a movement harking back to the methodological proposals of John B. Watson, who coined the name. Watson’s 1913 manifesto proposed abandoning Introspectionist attempts to make consciousness a subject of experimental investigation to focus instead on behavioral manifestations of intelligence. B. F. Skinner later hardened behaviorist strictures to exclude inner physiological processes along with inward experiences as items of legitimate psychological concern.”
Hauser, Larry. “Behaviorism,” Internet Encyclopedia of Philosophy
http://www.iep.utm.edu/behavior/

Ouch! The very essence of behaviourism was to abandon the study of the internal mental or biological states of human beings, and of human consciousness, in order to focus on outward “behavioural manifestations of intelligence.” Behaviourism had no interest in the internal explanation of human mental states and intelligence; instead, it focused on external signs of them.

Behaviourism led to some real intellectual disasters in 20th century social sciences and philosophy. It shows up all over the place.

For example, B. F. Skinner’s Verbal Behavior applied the behaviourist paradigm to linguistics in a deeply flawed manner, which was brought out by Noam Chomsky’s now famous 1959 review of that book (Schwartz 2012: 181).

Even the analytic philosopher Willard Van Orman Quine’s misguided attempt to deny a valid distinction between analytic and synthetic propositions is a legacy of crude verbal behaviourism.

I. Turing’s Paper “Computing Machinery and Intelligence”: A Critical Summary
Turing divided his paper into the following sections:

(1) The Imitation Game
(2) Critique of the New Problem
(3) The Machines concerned in the Game
(4) Digital Computers
(5) Universality of Digital Computers
(6) Contrary Views on the Main Question
(7) Learning Machines.

Let us review them one by one.

(1) The Imitation Game
Turing proposes to answer the question: “Can machines think?” He rightly notes that any sensible discussion should begin with a definition of “machine” and “think” (Turing 1950: 433).

Unfortunately, no such definition is given. Turing rightly notes that a serious definition could not simply rely on an opinion poll of what people take these words to mean, but he then sets the issue aside. In place of a definition, Turing proposes the “Imitation Game.”

In this game, we have three players, A, B, C.

A is a machine, B a human being, and C a person who asks questions indirectly of A and B (who remain hidden from C). If C asks questions of A and B at length and cannot determine who the computer is and who the human is, then the machine is deemed to have passed the test (Turing 1950: 433–434).
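To make the setup concrete, here is a minimal sketch of the game’s structure in Python (my own toy framing, nothing from Turing’s paper except the two sample questions, which are his):

```python
import random

def imitation_game(machine, human, judge, questions):
    # The players are hidden: the interrogator C sees only two text
    # transcripts and must guess which channel carries the machine.
    players = [("machine", machine), ("human", human)]
    random.shuffle(players)                 # C does not know who is who

    transcripts = ([], [])
    for q in questions:
        for i, (_, answer) in enumerate(players):
            transcripts[i].append((q, answer(q)))

    guess = judge(transcripts)              # judge returns 0 or 1
    return players[guess][0] == "machine"   # True = C caught the machine

# Toy players and a hopeless judge, just to exercise the structure.
machine = lambda q: "Count me out on this one. I never could write poetry."
human = lambda q: "Give me a moment to think about that."
judge = lambda transcripts: 0               # always accuses channel 0
print(imitation_game(machine, human, judge,
                     ["Please write me a sonnet on the subject of the Forth Bridge.",
                      "Add 34957 to 70764."]))
```

The machine’s canned reply here is in fact the specimen answer Turing himself gives in the paper.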

At this point, it should be obvious that the “Imitation Game” is the Turing Test.

(2) Critique of the New Problem
Since the machine or computer remains hidden, Turing emphasises that this is a test of behaviour or “intellectual capacities,” not, say, external appearance (Turing 1950: 434).

The “best strategy” for the machine is “to try to provide answers that would naturally be given by a man” (Turing 1950: 435).

At this point, Turing raises the crucial question:

“May not machines carry out something which ought to be described as thinking but which is very different from what a man does? This objection is a very strong one, but at least we can say that if, nevertheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.” (Turing 1950: 435).

Just look at how Turing brushes this question aside.

Since Turing has no interest in defining “intelligence” and commits himself to a crude behaviourist test, his nonchalant attitude here is understandable.

But it remains a devastating problem with his whole approach and, moreover, undermines the worth of the Turing test.

For the question Turing raises – can a machine “carry out something which ought to be described as thinking but which is very different from what a man does?” – cannot be evaded. It is at the heart of the problem.

Let us propose two preliminary definitions of “intelligence” as follows:

(1) information processing that takes input and creates output that allows external behaviour of the type that animals and human beings engage in, and

(2) information processing that takes input and creates output that is accompanied by the same type of consciousness that human beings have.

Now a behaviourist would be interested in (1), but can ignore (2).

But (2) is the philosophically and scientifically interesting question.

(3) The Machines concerned in the Game
Turing now clarifies that by “machines” he means an “electronic computer” or “digital computer”: he only permits digital computers to be the machine in his Turing Test (Turing 1950: 436).

It is also explicit in the paper that Turing envisaged future computers as being sophisticated enough to pass the test, not the computers of his own time (Turing 1950: 436).

(4) Digital Computers
In this section, Turing explains the nature and design of digital computers.

But to make a computer mimic any particular action of a human being, the instructions for that action have to be carefully programmed (Turing 1950: 438).
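Turing’s own illustration in this section is an instruction of the kind “Add the number stored in position 6809 to that in position 4302 and put the result back into the latter storage position.” A toy sketch of his store/executive-unit/control picture (the encoding is my own, not the paper’s):

```python
# Toy model of Turing's Section 4 picture of a digital computer:
# a store (memory), an executive unit that obeys one instruction at
# a time, and a control that steps through the instruction table.
store = {6809: 5, 4302: 7}

program = [
    ("ADD", 6809, 4302),    # store[4302] = store[4302] + store[6809]
    ("PRINT", 4302, None),  # report the result
]

for op, src, dst in program:   # the "control"
    if op == "ADD":            # the "executive unit" obeying the table
        store[dst] += store[src]
    elif op == "PRINT":
        print(store[src])      # prints 12
```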

(5) Universality of Digital Computers
Turing discusses discrete state machines (Turing 1950: 439–440), and points out that digital computers can be universal machines in the sense that one computer can be specifically programmed to compute a vast array of different functions (Turing 1950: 441).
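To make “discrete-state machine” concrete, here is a sketch in the spirit of Turing’s own wheel-and-lamp illustration in this section (the exact transition table is my assumption): the machine’s entire behaviour is exhausted by a finite table.

```python
# A wheel clicks through three positions unless a brake is applied;
# a lamp lights in one position. Nothing the machine does outruns
# this finite transition table -- that is what "discrete-state" means.
TRANSITIONS = {
    ("q1", "off"): "q2", ("q1", "on"): "q1",
    ("q2", "off"): "q3", ("q2", "on"): "q2",
    ("q3", "off"): "q1", ("q3", "on"): "q3",
}
LAMP_ON = {"q3"}

state = "q1"
for brake in ["off", "off", "on", "off"]:
    state = TRANSITIONS[(state, brake)]
    print(state, "lamp lit" if state in LAMP_ON else "lamp dark")
```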

Turing now returns to his main question:

“It was suggested tentatively that the question, ‘Can machines think?’ should be replaced by ‘Are there imaginable digital computers which would do well in the imitation game?’ If we wish we can make this superficially more general and ask ‘Are there discrete state machines which would do well?’ But in view of the universality property we see that either of these questions is equivalent to this, ‘Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?’” (Turing 1950: 442).

(6) Contrary Views on the Main Question
Unfortunately, Turing’s answer to the question whether computers can think is just as nonchalant as in section 1:

“It will simplify matters for the reader if I explain first my own beliefs in the matter. Consider first the more accurate form of the question. I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent. chance of making the right identification after five minutes of questioning. The original question, ‘Can machines think?’ I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” (Turing 1950: 442).

The very question “Can machines think?” is dismissed by Turing as “too meaningless to deserve discussion.”

There is not even any attempt to properly delineate or define the sense in which the word “intelligence” might be understood.

For Turing, all that matters is that the computer can successfully play the imitation game.

Turing then turns to objections which might be proposed to the idea that computers can think:

(1) The Theological Objection
This is the objection that human beings have an immaterial essence or soul that makes them intelligent. Turing dismisses this.

(2) The ‘Heads in the Sand’ Objection
This is really nothing more than the objection that machines being able to think is a horrible idea. Turing again responds that such an emotional response will not do.

(3) The Mathematical Objection
Turing considers the limitations of discrete-state machines in relation to Gödel’s theorem.

Turing replies to this by pointing out that the Imitation Game’s purpose is to make a computer seem like a human, so that incorrect answers or the inability to answer logical puzzles wouldn’t necessarily be a problem.

(4) The Argument from Consciousness
It is here we come to what should be the most interesting part of the paper.

Turing quotes a critic who rejects the idea that machines can ever equal the human brain:

“This argument [sc. from consciousness] is very well expressed in Professor Jefferson’s Lister Oration for 1949, from which I quote. ‘Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain—that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.’” (Turing 1950: 445–446).

Now this goes too far in demanding that the machine directly experience human emotion, because it is, I imagine, possible for a human being to feel no emotion, owing to a brain disorder, and yet still be conscious and intelligent.

Nevertheless, the demand that a machine would have to be fully conscious like us to be the equal of the conscious intelligent human mind is sound.

What is Turing’s response? Turing says that Jefferson demands that “the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking” – that is, we need solipsistic proof (Turing 1950: 446).

But that is a straw man and misrepresentation of what Jefferson said. Jefferson said not that he would need to be the machine to be convinced that it is the equal of the human brain, but that it must have the same conscious life as the human brain. Turing’s dismissal of this will not stand.

Turing then argues that if a computer could actually write sonnets and answer aesthetic questions about them, then it could pass the Turing Test and meet Jefferson’s demand (Turing 1950: 447).

(5) Arguments from Various Disabilities
Here Turing replies to critics who argue that, no matter how sophisticated a computer is, there will always be something it cannot do that humans can do.

Turing responds to this by saying that computers with more memory and better and better programs will overcome such an objection (Turing 1950: 449).

(6) Lady Lovelace’s Objection
This stems from objections made by Lady Lovelace to Babbage’s Analytical Engine.

Lovelace essentially said that such machines are bound by their programs and cannot display independent or original behaviour or operations (Turing 1950: 450).

Turing counters by arguing that sufficiently advanced computers might be programmed to do just that by learning (Turing 1950: 450).

(7) Argument from Continuity in the Nervous System
Here we get an interesting objection: that the human nervous system is continuous rather than a discrete-state machine, so that what brains do arises from a different biological substrate from that of electronic computers:

“The nervous system is certainly not a discrete-state machine. A small error in the information about the size of a nervous impulse impinging on a neuron, may make a large difference to the size of the outgoing impulse. It may be argued that, this being so, one cannot expect to be able to mimic the behaviour of the nervous system with a discrete-state system.

It is true that a discrete-state machine must be different from a continuous machine. But if we adhere to the conditions of the imitation game, the interrogator will not be able to take any advantage of this difference.” (Turing 1950: 451).

So Turing here dismisses the objection by saying that this doesn’t matter provided that the computer can fool an interrogator.
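Turing’s reply is in fact a little richer than bare dismissal: he notes that a discrete machine could imitate a continuous one, such as a differential analyser, by choosing its answers with suitable probabilities. A sketch of that reply (the figures are roughly those Turing himself offers for the value of π):

```python
import random

# A digital computer imitating a continuous differential analyser:
# asked for pi, it does not reproduce the analyser's noisy output but
# samples from a similar distribution. From the interrogator's seat,
# the two machines are indistinguishable.
VALUES = [3.12, 3.13, 3.14, 3.15, 3.16]
WEIGHTS = [0.05, 0.15, 0.55, 0.19, 0.06]

def answer_pi():
    return random.choices(VALUES, weights=WEIGHTS, k=1)[0]

print(answer_pi())   # e.g. 3.14
```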

But this behaviourist obsession only with external output will not do. If we wish to answer the question “can computers attain the same conscious intelligence as people,” there must be a serious attempt to answer it. There is none.

(8) The Argument from Informality of Behaviour
The argument here is that human behaviour encompasses a vast range of activities and choices that cannot be adequately listed or described in rule books or programs.

Turing replies that nevertheless overarching general rules of behaviour are sufficient to make machines appear as humans (Turing 1950: 452).

(9) The Argument from Extra-Sensory Perception
In a quite bizarre section, Turing raises the possibility that human beings have extra-sensory perception, such as telepathy, clairvoyance, precognition and psycho-kinesis (Turing 1950: 453), that machines can never have.

Turing even states that “the statistical evidence, at least for telepathy, is overwhelming” (!!) (Turing 1950: 453). This would apparently pose problems for the Turing Test, though Turing’s argument here is rather confused.

One solution, Turing suggests, is that we would need to have a “telepathy-proof room” in Turing Tests! (Turing 1950: 453).

(7) Learning Machines
In the final section, Turing suggests that the human mind is fundamentally a mechanical system, though not a discrete-state machine (Turing 1950: 455).

Turing thought that by the year 2000 there would be a definitive answer to the question whether machines can regularly pass the Turing test, given a tremendous increase in memory and in the complexity of programming (Turing 1950: 455). Turing suggests that the first step is to create a computer that can simulate the answers of children and be a type of learning machine (Turing 1950: 456–459).

II. Critique of the Turing Test
Let us return to the two definitions of “intelligence” as follows:

(1) a quality of humans or digital computers in which information processing takes input and creates output that allows actions, sentences, or external behaviour of the type that animals and human beings engage in, and

(2) information processing that takes input and creates output that is accompanied by the same type of consciousness that human beings have.

If we define “intelligence” in sense 1, then of course you can say that computers are “intelligent,” and not just computers that pass the Turing Test.

But if we focus on sense (2) – which is the philosophically and scientifically interesting question – it is not at all clear that computers have or can ever have the same *conscious* intelligence that human beings have.

Now the behaviourist Turing Test obviously influenced the functionalist theory of the mind, which holds that functional mental states may be realized in different physical substrates, e.g., in a computer. This freed functionalist psychologists and artificial intelligence researchers from a strict dependence on neuroscience. Since, for the functionalists, mental states are abstract processes that can be created in multiple physical systems, AI researchers using functionalism were free to study mental processes in a way that did not reduce their discipline to the mere study of brain neuroscience and its physics and chemistry.

Those AI researchers following Turing adopted a top-down “Good Old Fashioned AI” (GOFAI), or symbolic AI, research program and really thought that they could create artificial intelligence as rich as human intelligence. But their attempts to create a human-level artificial intelligence ended in miserable failure. Of course, we did get some very useful technology out of their research, but none of these computers can remotely approach the full level of human intelligence.

Where did AI go wrong?

Let me start to answer that question by reviewing John Searle’s Chinese Room argument.

This was first presented in 1980. Searle imagines a room in which he is present, with a slot through which paper can be passed in or out. Searle receives papers through the slot in the room. On the papers are symbols, which Searle can look up in a complex rule book that allows him to match the symbols and then write another set of symbols on a paper, which he can then pass out of the room. The rule book is a complex set of algorithms that provides him with instructions for producing new symbols. The symbols that Searle manipulates are in fact written Chinese, and the responses are intelligent answers to Chinese questions. Searle argues that, because he does not understand Chinese, a mere process that manipulates symbols with a formal syntax can never actually understand the meaning of the symbols. An understanding of real meaning, in other words, is impossible for such a system. We need a human mind with consciousness and intentionality for that.
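The point can be made vivid with a toy sketch (mine, not Searle’s): the rule book is in effect a lookup structure, and executing it requires no grasp of what any symbol means.

```python
# A toy "rule book": purely syntactic pattern -> response rules.
# Following these rules can produce sensible-looking Chinese answers
# while neither the program nor the person executing it understands
# a single word of Chinese.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "Lovely."
}

def chinese_room(slip_of_paper: str) -> str:
    # Match the incoming squiggles against the book and copy out the
    # listed reply. No step involves knowing what the symbols mean.
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # "Say that again, please."

print(chinese_room("你好吗？"))
```

A real rule book would of course be vastly more complex than a dictionary, but Searle’s point is precisely that adding more rules adds more syntax, not semantics.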

This argument deployed by Searle is directed against Strong AI’s classical computational theory of mind, not just against Turing’s “Imitation Game,” for, as we have seen, Turing did not even address the question whether machines are conscious like human beings. According to the classical computational theory of mind, a sufficiently complicated program run on a digital computer could do what the human mind does, and this program would itself also be a mind, with the same cognitive states (consciousness, sensation, etc.) that a human mind has.

Searle believes that his Chinese room argument shows that Strong AI is completely mistaken, and that mere algorithmic manipulation of symbols with syntax can never produce a conscious mind with perception, sensation and intentionality. Searle believes that he has shown this because the symbols manipulated in the Chinese room could be any type of information (e.g., text of any language or auditory or visual information), yet the person or system that manipulates them does not understand their meaning. In Searle’s view, the computation is purely syntactic—there is no semantics (Boden 1990: 89). So Searle argues that, if the person manipulating the symbols does not understand their meaning, then no computer can either if it only uses a rule-governed symbol manipulation (Searle 1980: 82).

Searle also criticises the Turing Test in the Chinese Room argument, since the room, if one imagined it as a computer, could pass the Turing test but still have no understanding. The mere appearance of understanding, then, is no proof of its existence. This, Searle argues, is a serious flaw in Strong AI, because the Turing Test is deeply dependent on a mistaken behaviourist theory of the mind (Searle 1980: 85). Turing might reply that he does not even care if the computer has understanding or consciousness, and so is not even concerned with the question whether a computer can attain “intelligence” in sense (2) above.

Searle has also criticised the connectionist theory in a modified version of the Chinese Room argument, which he calls the Chinese Gym (Searle 1990: 20–25).

The responses to Searle are various, but connectionist critics of Searle argue that we cannot predict what emergent properties might arise in computers with vast amounts of parallel processing and vector transformations (Churchland 1990: 30–31).

But, of course, unless science properly understands precisely how consciousness emerges from the brain, there is no definitive answer.

For John Searle, the biological naturalist theory of the mind is the best one we have, and human minds are a type of emergent physical property from brains and necessarily causally dependent on the particular organic, biological processes in the brain. Synthetic digital computers cannot attain consciousness because they lack the physically necessary biological processes.

Another point is that, if we say that the human brain involves information processing, then surely the type of information processing that goes on in the brain is very different from that which occurs in digital computers, and clearly many aspects of the human mind are not computational anyway.

Finally, if a computer passes a Turing Test, all this demonstrates is that it is possible to simulate the verbal behaviour of human beings. It does not follow that a computer can have conscious intelligence in sense (2) above.

My final point is mainly flippant.

Recently, we got another amusing chapter in the story of AI: Microsoft’s chatbot “Tay.”

Microsoft launched its chatbot Tay on Twitter, an AI program that writes tweets by learning from the people who chat with it, supposedly producing tweets and conversations that sound more and more like those of a real human being.

It seems that certain people trolling this bot began frequently talking to it about highly – shall we say? – controversial things, and influencing the type of Tweets it was writing.
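The failure mode is easy to see in a deliberately naive sketch (my own toy echo-learner, certainly not Microsoft’s actual system): if a bot adds whatever users say to its own repertoire, coordinated users control what it says.

```python
import random

class NaiveLearnerBot:
    """Toy online learner: remember user messages and replay them.
    There is no filtering, so trolls who repeat a message come to
    dominate what the bot can say."""
    def __init__(self):
        self.repertoire = ["hello!", "tell me more"]

    def chat(self, user_message: str) -> str:
        self.repertoire.append(user_message)   # learn from anyone, unvetted
        return random.choice(self.repertoire)

bot = NaiveLearnerBot()
for msg in ["nice day", "AWFUL SLOGAN", "AWFUL SLOGAN", "AWFUL SLOGAN"]:
    bot.chat(msg)
print(bot.chat("how are you?"))   # the single most likely reply is now the trolls' slogan
```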

Tay was supposed to act like a teenage girl on Twitter. How did that experiment in AI go?

Within a day, Tay started spewing forth tweets:

(1) that denied the truth of the Holocaust;

(2) that expressed support for Nazism and asserted that Hitler did nothing wrong, and

(3) that called for genocide.

Not exactly another success for AI!

This is all described by the YouTube personality Sargon of Akkad in the video below.

N.B. Some bad language in the video and things in bad taste!

BIBLIOGRAPHY
Boden, M. A. 1990. “Escaping from the Chinese Room,” in M. A. Boden (ed.), The Philosophy of Artificial Intelligence. Oxford University Press, Oxford. 89–104.

Chomsky, Noam. 2009. “Turing on the ‘Imitation Game,’” in Robert Epstein, Gary Roberts and Grace Beber (eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer, Dordrecht. 103–106.

Churchland, Paul M. and Churchland, Patricia S. 1990. “Could a Machine Think?,” Scientific American 262.1 (January): 26–31.

Churchland, Paul M. 2009. “On the Nature of Intelligence: Turing, Church, von Neumann, and the Brain,” in Robert Epstein, Gary Roberts and Grace Beber (eds.), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Springer, Dordrecht. 107–117.

Hauser, Larry. “Behaviorism,” Internet Encyclopedia of Philosophy
http://www.iep.utm.edu/behavior/

Schwartz, Stephen P. 2012. A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley-Blackwell, Chichester, UK.

Searle, John R. 1980. “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3: 417–458.

Searle, John R. 1990. “Is the Brain’s Mind a Computer Program?,” Scientific American 262.1 (January): 20–25.

Turing, Alan M. 1936. “On Computable Numbers, with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society (Series 2) 42: 230–265, with a correction in 43 (1937): 544–546.

Turing, Alan M. 1950. “Computing Machinery and Intelligence,” Mind 59.236: 433–460.

