
Algorithms

This is an extract from my recent review article in Inside Story, focusing on Ellen Broad’s Made by Humans.

For the last thousand years or so, an algorithm (the word derives from the name of the Persian mathematician al-Khwarizmi) has had a pretty clear meaning: a well-defined formal procedure for deriving a verifiable solution to a mathematical problem. The standard example, Euclid’s algorithm for finding the greatest common divisor of two numbers, goes back to around 300 BCE. There are algorithms for sorting lists, for maximising the value of a function, and so on.
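To make the classical sense concrete, here is a minimal Python sketch of Euclid’s algorithm (the function name and test values are mine; the procedure itself is Euclid’s):

    def gcd(a: int, b: int) -> int:
        # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b);
        # when the remainder reaches zero, a holds the greatest common divisor.
        while b != 0:
            a, b = b, a % b
        return a

    print(gcd(48, 36))  # prints 12

Every step is mechanical and the result can be checked independently, which is what the classical definition demands.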

As their long history indicates, algorithms can be applied by humans. But humans can only handle algorithmic processes up to a certain scale. The invention of computers made human limits irrelevant; indeed, the mechanical nature of the task made executing algorithms an ideal job for computers. On the other hand, the hope of many early AI researchers that computers would be able to develop and improve their own algorithms has so far proved almost entirely illusory.

Why, then, are we suddenly hearing so much about “AI algorithms”? The answer is that the meaning of the term “algorithm” has changed.

A typical example, says Broad, is the use of an “algorithm” to predict the chance that someone convicted of a crime will reoffend, drawing on data about their personal characteristics and those of their crime. The “algorithm” turns out to over-predict reoffending by black defendants relative to white defendants.
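Broad gives no code, and none of what follows is her example, but systems of this kind are, at bottom, ordinary statistical classifiers. A hypothetical sketch, with invented features and made-up data, using scikit-learn:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Invented training data: one row per past offender, with features
    # such as age at conviction and number of prior offences. Labels
    # record whether the person was later reconvicted (1) or not (0).
    X = np.array([[19, 3], [45, 0], [23, 1], [31, 2], [52, 0], [27, 4]])
    y = np.array([1, 0, 0, 1, 0, 1])

    model = LogisticRegression().fit(X, y)

    # The "risk score" is just the predicted probability of reoffending.
    print(model.predict_proba([[25, 2]])[0, 1])

The statistical machinery here is standard; the contested choices lie in which features are collected, how the outcome label is defined, and what is done with the score.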

Social scientists have been working on problems like these for decades, with varying degrees of success. Until very recently, though, predictive systems of this kind would have been called “models.” The archetypal examples — the first econometric models used in Keynesian macroeconomics in the 1960s, and “global systems” models like that of the Club of Rome in the 1970s — illustrate many of the pitfalls.

A vast body of statistical work has developed around models like these, probing the validity or otherwise of the predictions they yield, and a great many sources of error have been found. Model estimation can go wrong because causal relationships are misspecified (as every budding statistician learns, correlation does not imply causation), because crucial variables are omitted, or because models are “over-fitted” to a limited set of data.
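Over-fitting, in particular, is easy to demonstrate. In the sketch below (synthetic data, NumPy only), a degree-7 polynomial fits eight noisy points from a linear process exactly, yet typically predicts fresh points from the same process worse than a straight line does:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 8)
    y = 2 * x + rng.normal(0, 0.1, size=x.shape)   # true relation is linear

    line = np.polyfit(x, y, deg=1)     # two parameters
    wiggle = np.polyfit(x, y, deg=7)   # one parameter per data point

    # Fresh draws from the same process expose the over-fitted model.
    x_new = np.linspace(0.05, 0.95, 50)
    y_new = 2 * x_new + rng.normal(0, 0.1, size=x_new.shape)
    for name, coef in [("linear fit", line), ("degree-7 fit", wiggle)]:
        mse = np.mean((np.polyval(coef, x_new) - y_new) ** 2)
        print(name, "out-of-sample MSE:", round(float(mse), 4))

The flexible model reproduces the noise in the sample rather than the relationship behind it, which is precisely what out-of-sample testing is designed to catch.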

Broad’s book suggests that the developers of AI “algorithms” have made all of these errors anew. Asthmatic patients are classified as being at low risk for pneumonia when in fact their good outcomes on that measure are due to more intensive treatment. Models that are supposed to predict sexual orientation from a photograph work by finding non-causative correlations, such as the angle from which the shot is taken. Designers fail to consider elementary distinctions, such as those between “false positives” and “false negatives.” As with autonomous weapons, moral choices are made in the design and use of computer models. The more these choices are hidden behind a veneer of objectivity, the more likely they are to reinforce existing social structures and inequalities.
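The distinction between false positives and false negatives is worth spelling out, since it is where the reoffending example above goes wrong: a model can score the same overall accuracy for two groups while making opposite kinds of errors for each. A small sketch with made-up outcomes and predictions:

    import numpy as np

    def error_rates(actual, predicted):
        # False positive rate: share of people who did not reoffend but
        # were flagged as high risk. False negative rate: the reverse error.
        actual, predicted = np.asarray(actual), np.asarray(predicted)
        fpr = np.mean(predicted[actual == 0] == 1)
        fnr = np.mean(predicted[actual == 1] == 0)
        acc = np.mean(predicted == actual)
        return fpr, fnr, acc

    # Made-up data: both groups have identical true outcomes and the
    # model scores the same overall accuracy on each, yet the errors differ.
    groups = {
        "group A": ([0, 0, 1, 1, 0, 1], [1, 0, 1, 1, 1, 1]),
        "group B": ([0, 0, 1, 1, 0, 1], [0, 0, 0, 0, 0, 1]),
    }
    for name, (actual, predicted) in groups.items():
        fpr, fnr, acc = error_rates(actual, predicted)
        print(f"{name}: accuracy {acc:.2f}, "
              f"false positives {fpr:.2f}, false negatives {fnr:.2f}")

Identical accuracy, opposite failure modes: group A is systematically over-flagged and group B systematically cleared. That is the structure of the disparity Broad describes, and a single accuracy figure hides it.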

The superstitious reverence with which computer “models” were regarded when they first appeared has been replaced by (sometimes excessive) scepticism. Practitioners now understand that models provide a useful way of clarifying our assumptions and deriving their implications, but not a guaranteed path to truth. These lessons will need to be relearned as we deal with AI.

Broad makes a compelling case that AI techniques can obscure human agency but not replace it. Decisions nominally made by AI algorithms inevitably reflect the choices made by their designers. Whether those choices are the result of careful reflection, or of unthinking prejudice, is up to us.

John Quiggin
He is an Australian economist, a Professor and an Australian Research Council Laureate Fellow at the University of Queensland, and a former member of the Board of the Climate Change Authority of the Australian Government.
