
John Searle on Consciousness in Artificial Intelligence

John Searle, Slusser Professor of Philosophy at the University of California (Berkeley), gives a Google talk below on consciousness in artificial intelligence.

John Searle is a great analytic philosopher in the tradition of Bertrand Russell, and his work on AI and consciousness is particularly interesting. This talk is really insightful.

Searle is summarising arguments from his paper “Minds, Brains, and Programs” (1980) and his book Consciousness and Language (2002).

Searle makes a very interesting distinction between observer-relative and observer-independent objects. This is actually the same distinction made by Karl Popper between World 3 objects and World 1 objects.

Searle also points to two important principles:

(1) syntax is not semantics, and

(2) simulation is not duplication.

Turing machines are devices designed to process symbols automatically by means of fixed syntactic rules or algorithms, but with no understanding of those symbols. Computers, then, are automated syntactic systems: they manipulate symbols but are devoid of semantics.
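
As a rough illustration of this point (my own sketch in Python, not anything from Searle's talk), here is a minimal Turing-style machine: it rewrites a string of symbols according to a fixed rule table, and nothing in the program understands what the symbols stand for.

# A minimal Turing-style machine: it scans a tape of 0s and 1s and flips each bit.
# Whatever the 0s and 1s "mean" (a number? a message?) exists only for us, the
# observers; the machine merely matches symbols against rules and rewrites them.

# Transition table: (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # blank end-marker: stop
}

def run(tape):
    """Apply the rule table until the machine halts; return the rewritten tape."""
    tape = list(tape) + ["_"]          # append a blank cell at the end
    state, head = "scan", 0
    while state != "halt":
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip("_")

print(run("10110"))   # prints "01001": pure symbol shuffling, no understanding

The point of the sketch is simply that every step is a blind syntactic substitution; any semantics attaches to the input and output only in the eyes of the person reading them.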

Searle also rightly notes two senses of the concept of “intelligence”:

(1) an observer-relative sense of intelligence that Turing machines can have when they automatically process symbols by means of set syntactic rules or algorithms to create output from input, and

(2) the kind of intelligence bound up with consciousness, perception, sensation, and conscious experience, as found in the minds of the higher animals.

Sense (2) is observer-independent, intrinsic and internal.

What of computation? Searle argues that computation is not intrinsic to machines. The same distinction between observer-relative and observer-independent phenomena can be applied to computation. People can engage in observer-independent and intrinsic computation, just as Turing’s human computers did. But machine computation is observer-relative. As Popper would say, what the machine does, considered as a World 1 process, is not computation but merely a set of physical World 1 processes. The machine’s functioning becomes computation only in World 3, because it takes human beings to recognise it as such and to interpret its physical operation. The machine also lacks observer-independent, intrinsic conscious intelligence.
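
Again as an illustrative sketch of my own (not an argument Searle gives in this form), the observer-relativity of machine computation can be seen in the fact that one and the same physical bit pattern counts as one computation or another only relative to the encoding a human observer chooses to read it under:

# One and the same physical state (a run of 16 bits) has no intrinsic interpretation.
# Whether it "is" a number, a pair of small numbers, or a piece of text depends
# entirely on the scheme an observer brings to it.

bits = 0b01001000_01101001                         # a single 16-bit pattern

as_unsigned = bits                                 # read as one unsigned integer
as_pair = (bits >> 8, bits & 0xFF)                 # read as two 8-bit values
as_text = bits.to_bytes(2, "big").decode("ascii")  # read as two ASCII characters

print(as_unsigned)   # 18537
print(as_pair)       # (72, 105)
print(as_text)       # Hi

Considered purely as a World 1 process, the machine just holds charges in a register; that it is “computing” an integer, a pair, or the string “Hi” is our interpretation of it.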

This is perhaps the weakest point of the argument, for what about natural kinds of information processing, such as in DNA? Clearly, there must be types of natural information in World 1 that have emerged through Darwinian evolution.

But neither natural information by itself nor computation in Turing machines is a sufficient condition for consciousness.

Searle also reviews the biological naturalist theory of the mind, and he notes that the creation of a true artificial intelligence like ours would be analogous to the creation of an artificial heart. No matter how good your computer simulation of a human heart is, it does not pump blood and it is not an actual heart. An artificial system that reproduces what a heart is and does in the human body needs to reproduce its causally necessary physical attributes, even if it is not organic but synthetic. It is the same with the mind: you need to duplicate the human mind, not simulate it. Whether this requires exactly the same kind of physical and biological processes found in the brain, or whether it can be done with different biochemical processes or even synthetic materials, is currently unknown.

In essence, Turing machines and computers are as dead, as unconscious, as senseless as rocks.

By contrast, even the lower animals have some degree of observer-independent and internal conscious intelligence, and certainly the higher animals do, just like the dog at the end of the video below.

BIBLIOGRAPHY
Searle, John R. 1980. “Minds, Brains, and Programs,” Behavioral and Brain Sciences 3: 417–424.

Searle, John R. 2002. Consciousness and Language. Cambridge University Press, New York.

Lord Keynes
