Will Artificial Intelligence replace us? – The Article Interview

July 19, 2024

This essay falls into three parts. First, I discuss the question of what it is which makes humans unique — that is, irreplaceable. Second, I consider whether machines on balance enhance or diminish humanness. This has become an issue of the moment with the growth of machine intelligence. Finally, I try to answer two questions: how can we secure our survival as human beings? Is it worth trying to do so?

A quick preview of my answer to the first question. Some bits of humans are clearly replaceable. They fall into the category of spare parts. The bits which aren’t are what used to be called soul and which we now call mind: in religious language, the bits which link us to the Divine. We urgently need to decide which bits should and should not be replaced so as to avoid being replaced entirely.

1. What is unique about being human?

The Christian answer is that humans were created in God’s image and carry within them a spark of the Divine. Uniquely among living things, we carry the image (or in some versions the actual substance) of His perfection in us. This is at the root of the human striving for perfection.

We alone of living things have been gifted with awareness of who we are, what we are doing, and of our relations with the world.

We never meet a pig that knows it is a pig. Indeed, the word “meet” is questionable, for it signifies mutual recognition.

The palpable gap, the yawning chasm, between humans and animals is the big flaw in Darwinian evolutionary theory. The break between brain and mind is inexplicable in purely evolutionary terms.

Another way of saying this is that the mind can’t be reduced to the brain. An artificial brain may be able to replicate everything the human brain does without being human.

Seventeenth-century philosophers, transitioning from theology to science, tried to keep a physical place for the soul. Descartes looked for it in the pineal gland. The progression then was from soul to mind and from mind to brain. For the materialist, mind is simply a complex brain.

This is wrong. The brain is an information processor. Animals have brains too. The difference between humans and animals cannot be reduced to one of brain size or complexity.

In mainstream discussion, knowledge is equated with information: the greater the brain’s storage and processing capacity, the greater the sum of knowledge. But as Francis Sheed says, knowledge brings problems. If all knowledge is reduced to information, problems disappear. One can trace this slippery slope very easily in economics, for example in models of perfect foresight, which abolish uncertainty and all the problems that arise from it.

The argument goes that once scientists have cracked the code of thinking, humans will be completely replaceable by machines. More ominously, they will be better at thinking than humans, because they will never go wrong, as humans still do, whether from ignorance or from neurological imbalances. Humans will then become permanently redundant, though they may be kept on as pets.

2. Do machines enhance or diminish humanness?

This argument is hard to summarise. I would argue that machines enhanced our humanness in the past, but are likely to diminish it in the future, as the ratio of machine capacity to human capacity steadily rises.

The optimistic case was most famously put by the economist John Maynard Keynes, in his essay “Economic Possibilities for our Grandchildren”, published in 1930. It is all the more remarkable for having been written in the depth of the Great Depression, when the world seemed to be sliding downhill into a new dark age. Extrapolating from the technological progress in his own lifetime, Keynes calculated that his putative grandchildren — that is, people still alive today — would only need to work three hours a day to obtain all that they needed for a good life. The historian Arnold Toynbee echoed him: machines would make possible the “transfer of energy from some lower sphere of being or action to a higher”. In this prognosis, machines were unmistakably benefactors, one might even say the agents of God.

On the opposite side has always been the fear of redundancy, of uselessness. The promises were for the future, while in the present machines threatened to deprive workers, not just of the means of life but of the meaning of life.

The economist David Ricardo, writing at a time when the Luddites were smashing machinery, believed that the substitution of machinery for human labour “was often very injurious to the interests of the class of labourers”. If — remember this was at the beginning of the 19th century — a farmer sacked fifty human workers and replaced them with horses, “this would not”, Ricardo remarked soberly, “be in the interest of the men”. If machines made all jobs redundant, the human race itself would become redundant. Interestingly, Ricardo viewed horses as “machines”.

The debate has not much progressed since then. Fears of worker redundancy were swept away as the Industrial Revolution made possible an increase in both population and wages, albeit at the cost of serious disruption.

The new twist in the argument today is that the current wave of automation differs from that of the past by automating mental work in addition to manual work. It’s not just the horses and manual workers who have to go on the scrap heap, but all the white-collar workers, too. Not only does technology bite ever deeper into cognitive work, but it does so at an accelerating rate. This means that there will soon be almost no jobs that robots could not do as well as, or even better than, humans. Therefore redundancies due to automation will inevitably exceed those caused by mechanisation in the past; some new jobs may be created, but far fewer than the jobs destroyed. A prominent recent pessimist, the Oxford economist Daniel Susskind, claims that there are “fewer and fewer jobs which can only be performed by human beings”.

So what is to be done? The only approach to a normative answer lies in the understanding of humanness.

In terms of Keynes’s story, do machines hasten or retard the entry to Paradise on earth? Keynes obviously thought they would hasten it. Machinery should make it possible to “satisfy the old Adam in most of us” with only three hours’ work a day. With abundance we would become like “the lilies of the field…they toil not, neither do they spin”.

Keynes was drawing here on biblical imagery which would have been familiar to his readers. Perfect humanity existed in the Garden of Eden. Then came the Fall: humanity became imperfect because it succumbed to the wiles of the Serpent, but there was the possibility of Redemption and a Return to perfection.

Keynes drew on the language (“software” in today’s jargon) of this story to tell a very different one. The promise of perfection lay entirely in the future: there was no actual past to regain.

Keynes’s story, that is, starts with post-Edenic life, in which the ground is covered with thorns and thistles, women are condemned to bear children in pain, and men cursed to eat bread in the sweat of their brows. Paradise lies ahead, not behind us. This was the Enlightenment vision.

Interestingly, the anthropologists James Suzman and Marshall Sahlins have idealised the hunter-gatherer way of life, a time when humanity was free (literally) to sow its wild oats. God’s curse was the invention of the plough.

In Keynes’s telling, the machine points humans towards God, but it is also devilish. The devil in Keynes’s story is capitalism. Capitalism was the spirit of gain, of love of money, which came to dominate Europe’s economic life from the 17th century onwards. This is the argument of Max Weber in The Protestant Ethic and the Spirit of Capitalism.

Unlike Weber, for whom the capitalist was a heroic entrepreneur, Keynes portrays the spirit of capitalism as a neurosis — one which future times would hand over to specialists in mental disease, but which, by enabling the accumulation of capital — or machinery — would lift up humanity from scarcity to plenty.

With the coming of abundance, he writes, humanity would be

“free… to return to some of the most sure and certain principles of religion and traditional virtue — that avarice is a vice, that the exaction of usury is a misdemeanour, that love of money is detestable, that those walk most truly in the paths of virtue and sane wisdom who take least thought for the morrow”.

So the Devil, who stands for the love of money and power, nevertheless delivers humanity back to its Creator. By this trick Keynes disposes of the obstacle original sin poses to his sunny future — a solution which has never been entirely convincing.

Keynes’s prognostication has turned out only partly right. Since 1930 technological progress has lifted average real income per head in rich countries roughly five times, much in line with Keynes’s expectations, but average weekly hours of full-time work in these countries have fallen only by some 20 per cent, from about 50 hours to 40. What did Keynes get wrong?
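To put the shortfall in figures (a back-of-envelope comparison: Keynes’s essay speaks of “three-hour shifts or a fifteen-hour week”, and the 50- and 40-hour figures are the rounded ones above):

\[
\text{predicted: } 3 \ \text{hours/day} \times 5 \ \text{days} = 15 \ \text{hours per week}
\]
\[
\text{actual fall in hours: } \frac{50-40}{50} = 20\%, \qquad \text{predicted fall: } \frac{50-15}{50} = 70\%.
\]

In other words, income grew roughly as Keynes expected, while hours of work fell at less than a third of the predicted rate.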

Partly it turned out that the Devil was lurking in more places than Keynes allowed. Paradise will be gained, Keynes assures us, “assuming no important wars and no important increases in population…” With these words, Keynes briskly dismissed the most obvious impediments to the realisation of his utopia.

Any theologically alert listener or reader will spot the big omission in Keynes’s story. Impulses to violence and the “sins of the flesh” — leading to war and overpopulation — come near the top of the Christian list of human sins. But to Keynes, writing in the Enlightenment spirit, these are simply contingent liabilities, to be redeemed by improved knowledge, better education, more intelligent political management, and suchlike.

To put it in theological terms, Keynes saw human nature as Pelagian, not Augustinian. As he grew older, his belief in natural goodness frayed. By the 1930s he believed that civilisation was “a thin and precarious crust” and that under this veneer lay “insane and irrational springs of wickedness”. He told Virginia Woolf that he would be inclined “not to demolish Xty [Christianity] if it were proved that without it morality was impossible”; the young, he added, were “trivial, like dogs in their lusts”. Where these thoughts would have led him had he lived longer, one can’t say.

So we return to the original question: assuming that technological progress is secured, will this be a benefit or a curse?

On the benefit side, consider this from a recent report of the International Labour Organisation:

“Policy debates are placing renewed emphasis on lifelong learning. This is based on the understanding that lifelong learning…increases workers’ and firms’ capability to adapt to changes in the world of work. Lifelong learning can therefore foster productivity and innovation…and help workers transition to quality employment…”

But there is another story to tell. John Thornhill rightly warns that, rather than augmenting human creativity, machines may “amplify human stupidity”.

Imagine a worker receiving instructions on how to make a pair of shoes through augmented reality goggles. All the steps in the operation are precisely choreographed. Anyone of normal dexterity will be able to manufacture an acceptable pair of shoes from such instructions. Can we really say that humans have been upskilled or enhanced? It would be odd to say that they know how to make a pair of shoes. They are simply following instructions. It is otiose to call this enhancing creativity.

On the other hand, lots of genuine skills have undoubtedly been lost. Invisible mending is no longer needed because we throw away clothes which have holes. Writing by hand first yielded to the typewriter; now typists have yielded to computers, and the act of writing to predictive text software. Doctors have become medical technicians, no longer required to ‘‘know their patients’’, simply to be able to read computer print-outs of their vital functions; London taxi drivers no longer have to ‘‘do the knowledge’’, creating a mental map of the city’s streets and buildings, but just use satellite navigation.

However, it is also true that working with new technologies compels workers to learn new skills — skills which might be based on theoretical knowledge, rather than just on looking around and finding what works.

Whether automation has a tendency to enhance or reduce humanness thus depends on what we mean by human. If to be human is to be robotic, then machines enhance humanness. But if, as most of us believe, it is to be more than robotic, every encroachment of robots on our lives will reduce our humanness.

But this isn’t the end of the discussion. Like Marx, Keynes believed that the reduction of necessity would automatically lead to an increase in freedom: indeed his economics of full employment was designed to get us over the hump of necessity and into the realm of freedom as quickly as possible. He was curiously blind to the possibility that the machines which freed us from work might restrict our freedom in the non-working parts of our lives. In retrospect the entanglement of actual machines with ideas about how to organise social and political life seems inevitable once the “science of society” took hold in the Enlightenment. In his classic The Road to Serfdom, Friedrich Hayek warned against “the uncritical transfer to the problems of society of the habits of thought of the natural scientist and the engineer”. But it was precisely the engineering ambition of making society as efficient as the factory or the office that built the modern world and turned Keynes’s realm of freedom into Weber’s “iron cage of bondage”.

The ominous possibilities of information technology as an instrument of social control were dramatically visualised in Jeremy Bentham’s famous design for a Panopticon in 1786. This was an ideal prison system, in which the prison governor could shine a light on the surrounding prison cells from a central watchtower, while himself remaining unseen. This would in principle abolish the need for actual prison guards, since the prisoners, aware of being continually surveilled, would voluntarily obey the prison rules. Bentham’s ambitions for his invention stretched beyond the prison walls, to schools, hospitals and workplaces. His was a vision of society as an ideal prison, governed by self-policed impersonal rules, applicable to all. His key methodology was a one-way information system: the governor would know all about the prisoners but would himself be invisible.

Bentham’s world is coming to pass. Today’s digital control systems operate, not through watchtowers, but through computers with electronic tracking devices, and voice and facial recognition systems. We enter Bentham’s prison voluntarily, oblivious to its snares. But once inside, it is increasingly difficult to escape. Commercial platforms and governments can hope to control our habits, thoughts, and tastes by “mining” the information about ourselves with which we provide them by using electronic devices for our convenience. The realm of privacy recedes as the technical possibility of surveillance expands.

Experts debate which is the greater threat — state surveillance or capitalist surveillance — but this is largely (though not entirely) a sham battle. Big Business and the State both use the same technology and, more often than not, cooperate rather than compete.

Keynes was, of course, aware of the malign uses to which surveillance technology was being put in his time. But he seems to have been thrown off guard by his belief that democracies provided sufficient safeguards against an Orwellian outcome. He was insensitive to the possibility that surveillance might creep up, unobserved, and even unintended, until it was too late to reverse.

3. The Accountability Issue

Increasing attention is being paid in the media to how we might control runaway or rogue intelligence.

The game is already lost when we refer to Machine or Artificial Intelligence. Machines are information processors, not knowledge producers. They can think, but only, so far, within the confines of games prescribed by their makers, or the instructions — which may be quite general — of their designers. The tipping point or singularity will come when they are able to think for themselves. To call machines intelligent is to reduce the difference between machines and humans to one of degree rather than kind. No defence of the principle of human uniqueness is possible on such lines.

The philosophical defence of human singularity — that humans have something called consciousness, or subjective first-person awareness, and that this causes intentional behaviour — is no longer explained by the hypothesis of a metaphysical entity like the soul. The dominant scientific view of the matter is that consciousness is rooted in the brain. The brain is essentially an imperfect machine. It then becomes only a matter of time before science finds a way of building machines with human-level intelligence. The date for the “singularity” — the moment of machine self-awareness — keeps being postponed because of “technical” difficulties; but that it will come sooner or later cannot be in doubt. And beyond that stretches the even more fanciful super-intelligence. “The train might not pause or even decelerate at Humanville Station. It is likely to swoosh right by,” writes the Oxford philosopher Nick Bostrom.

I end by considering a pseudo-religious movement called transhumanism which, while trying to grapple with the problem of human singularity and how we might try to preserve it, ends up in madness.

At the heart of transhumanism is the belief that the progress of artificial intelligence cannot be stopped, that the advance of machines to super-intelligence is bound to accelerate, and that therefore the most urgent task of the wise legislator (let us call him the philosopher king) is to ensure that the Artificial God works for the benefit of humanity and not against it.

The transhumanists are our contemporary Frankensteins. They are made up of academics at Oxford, Cambridge and MIT, variously funded by billionaire techno-utopians like Elon Musk, Peter Thiel and Mark Zuckerberg. Their core doctrine of “effective altruism” has evolved into a vision of immortality. The philosopher Emile Torres sees transhumanism as ‘‘quite possibly the most dangerous secular belief system in the world today’’.

Its voyage to Bedlam starts from a position of unqualified utilitarianism. The rightness of an action is to be judged solely by its consequences. The end justifies the means; no means are ruled out of court ab initio. The next step follows from the logic of counting heads. It is quantity of utility which matters, not quality. This means treating everyone’s utility the same, including that of those yet unborn. Thus the goal is not to maximise the utility of the present generation, but of all future generations, of which this generation will form only a tiny fraction. Ethically speaking, the utility of our generation should make only a tiny claim on our moral concern. As Toby Ord puts it: ‘‘because, in expectation, almost all of humanity’s life lies in the future, almost everything of value lies in the future as well’’. Effective altruism prioritises the interests of the yet unborn over those of the present generation.
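The arithmetic behind Ord’s claim can be made concrete (an illustrative back-of-envelope calculation, with figures of my own choosing rather than his): suppose roughly 10 billion people live in each century, and humanity survives another million years (that is, 10,000 more centuries). Then

\[
\frac{\text{people alive this century}}{\text{all people yet to live}} \approx \frac{10^{10}}{10^{4}\times 10^{10}} = \frac{1}{10{,}000},
\]

so on strictly aggregative assumptions the present generation would carry about one ten-thousandth of humanity’s total moral weight.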

The next step in the argument identifies the goal of maximising the utility of the universe with that of maximising its intelligence potential, that is, its capacity for creating value. Humans are unique among animals in their cognitive ability. Their cognitive potential has advanced through the operation of the Darwinian ‘‘survival of the fittest’’. With billions of survivors now inhabiting the planet, humanity’s intelligence has grown to the point when it can advance without limit.

With the development of AI, humans have, for the first time, taken charge of the evolutionary process. The claim is that AIs are starting to be built which can equal the best of human intelligence, and that super-intelligent ones will follow sooner rather than later. Bostrom defines super-intelligence as ‘‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’’. Since the design of machines is one of these cognitive performances, modestly super-intelligent machines could design even better machines; there would then follow ‘‘an intelligence explosion’’. A population of ultra-intelligent AIs would take over the business of evolution from humans. Thus ‘‘the first ultra-intelligent machine is the last invention that man need ever make’’.

The logic grinds on remorselessly. The body depends on the finite resources of our planet. But super-brains would be able to detach themselves from the limitations of the body. They might then escape from the limitations of our world, and establish colonies in our planetary ‘‘light cone’’, to be ‘‘fed’’ from its still unexhausted ‘‘endowment of negentropy’’ (or reverse entropy) in our cosmos.

Humanity’s intelligence potential could then be preserved and expanded for millions of years until the sun finally cooled. Actual humans are nothing but means to this end, and therefore valuable only insofar as they contribute to the overall net amount of value in the Universe between the Big Bang and the Heat Death. This is the philosophic/moral basis of the billionaire-financed projects of escape to the moon and other planets.

At this point, eschatological urgency seizes control of the transhumanist argument. The transhumanists share the view of the Doomsday scientists that AIs programmed with human intelligence only might quite possibly produce a nuclear or environmental catastrophe far worse than human intelligence on its own could achieve. Thus the coming of super-intelligence offers the possibility of either immortality or total disaster.

We aim to create a benevolent God, but it is always possible that, like Frankenstein’s creation, he or she may turn out to be a Deus Malignus, who might only pretend to have good intentions, but, once unchained, would set about destroying not just us, but its AI rivals. So our super-intelligent AIs must be programmed with moral rules before they take control of the future. But the only moral rules available come from our own imperfect and conflicting moral values. Wriggle as they might, transhumanists cannot escape the dilemma that there is no possibility, in a world of value relativism, of binding super-intelligence to an agreed morality. So the benevolence of our future controllers cannot be guaranteed.

While recognising the risk to humanity of super-intelligent AIs run berserk, transhumanists are too entranced by their dream of a cosmic computronium to propose shutting AI down before it reaches super-intelligence. Thus Ord writes: ‘‘a permanent freeze on technology…would probably itself be an existential catastrophe, preventing humanity from ever fulfilling its potential’’. The most they offer is a ‘‘pause for reflection’’ before allowing any further advance in AI. Such a pause, they hope, might give time for reaching global agreement on the moral rules our super-intelligent AIs need to have.

I think it is right to end my remarks on this point of madness. Remember: billions of dollars are being poured into this line of thinking and research. The only feasible response of sanity, as I see it, is a religious one. Humans must stop playing at God and start asking what it is that God wants of them.

To regain sanity, those concerned with the big issues of human survival must find some way of aligning their language with that of secularists engaged in the same quest, or better still of integrating the two. The worst thing is for religious people to keep their religion “private” and engage with the materialists on the latter’s terms. Theology must start advancing again after centuries of retreat.

This is an edited version of a lecture by Lord Skidelsky, given at Brompton Oratory on 13 June 2024. His book The Machine Age: An Idea, A History, A Warning was published by Allen Lane in November 2023.

Robert Skidelsky
Keynesian economist, crossbench peer in the House of Lords, author of Keynes: the Return of the Master and co-author of How Much Is Enough?
