Wednesday, December 18, 2024

Mainstream economics — an explanatory disaster




To achieve explanatory success, a theory should, minimally, satisfy two criteria: it should have determinate implications for behavior, and the implied behavior should be what we actually observe. These are necessary conditions, not sufficient ones. Rational-choice theory often fails on both counts. The theory may be indeterminate, and people may be irrational. In what was perhaps the first sustained criticism of the theory, Keynes emphasized indeterminacy, notably because of the pervasive presence of uncertainty …

Disregarding some more technical sources of indeterminacy, the most basic one is embarrassingly simple: how can one impute to the social agents the capacity to make the calculations that occupy many pages of mathematical appendixes in the leading journals of economics and political science and that can be acquired only through years of professional training?

I believe that much work in economics and political science that is inspired by rational-choice theory is devoid of any explanatory, aesthetic or mathematical interest, which means that it has no value at all. I cannot make a quantitative assessment of the proportion of work in leading journals that falls in this category, but I am confident that it represents waste on a staggering scale.

Jon Elster

Most mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.

The procedure is analytical. The whole is broken down into its constituent parts so that the aggregate (macro) can be explained as (reduced to) the result of the interaction of its parts (micro). When building their models, modern mainstream economists ground them on a set of core assumptions describing the agents as ‘rational’ actors and a set of auxiliary assumptions. Together these assumptions make up the base model of all mainstream economic models. Based on these two sets of assumptions, they try to explain and predict both individual and social phenomena.

The core assumptions typically consist of completeness, transitivity, non-satiation, expected utility maximization, and consistent efficiency equilibria.
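As a concrete illustration, the first two of these axioms, completeness and transitivity, can be checked mechanically for any finite set of alternatives. A minimal sketch (the alternatives and the preference relation below are purely hypothetical):

```python
from itertools import combinations

# A weak preference relation encoded as a set of (a, b) pairs,
# meaning "a is weakly preferred to b". Purely illustrative data.
alternatives = {"apple", "banana", "cherry"}
prefers = {("apple", "banana"), ("banana", "cherry"), ("apple", "cherry"),
           ("apple", "apple"), ("banana", "banana"), ("cherry", "cherry")}

def complete(alts, pref):
    """Completeness: every pair of alternatives is ranked in at least one direction."""
    return all((a, b) in pref or (b, a) in pref
               for a, b in combinations(alts, 2))

def transitive(alts, pref):
    """Transitivity: a >= b and b >= c together imply a >= c."""
    return all(not ((a, b) in pref and (b, c) in pref) or (a, c) in pref
               for a in alts for b in alts for c in alts)

print(complete(alternatives, prefers), transitive(alternatives, prefers))
# prints: True True
```

An agent whose preferences fail either check has no well-defined ‘best’ alternative, which is exactly why the axioms are needed before optimization can even be stated.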

When describing the actors as rational in these models, the concept of rationality used is instrumental rationality — consistently choosing the preferred alternative, which is judged to have the best consequences for the actor, given his interests and goals as exogenously specified in the model. How these preferences, interests, and goals are formed is not considered to be within the realm of rationality, and a fortiori not part of economics proper.

The picture given by this set of core assumptions — ‘rational choice’ — is a rational agent with strong cognitive capacity who knows what alternatives she is facing, evaluates them carefully, calculates the consequences, and — given her preferences — chooses the one she believes has the best consequences. Weighing the different alternatives against each other, the actor makes a consistent optimizing choice and acts accordingly.
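In its simplest textbook form, this optimizing choice is an expected-utility maximization: each alternative is a lottery over outcomes, and the agent picks the alternative with the highest probability-weighted utility. A minimal sketch, with purely illustrative alternatives and payoffs:

```python
# Each alternative maps to a lottery: a list of (probability, utility) pairs.
# Names and numbers are hypothetical examples, not data from the text.
lotteries = {
    "safe_bond":   [(1.0, 3.0)],                 # certain payoff
    "risky_stock": [(0.5, 10.0), (0.5, -2.0)],   # a gamble
    "cash":        [(1.0, 1.0)],
}

def expected_utility(lottery):
    """Expected utility: the sum of probability-weighted utilities."""
    return sum(p * u for p, u in lottery)

# The consistent optimizing choice is the argmax over expected utilities.
choice = max(lotteries, key=lambda a: expected_utility(lotteries[a]))
print(choice)  # risky_stock: EU = 0.5*10 + 0.5*(-2) = 4.0, the maximum
```

The critique in the text is precisely that real agents rarely have the well-defined probabilities and utilities this calculation presupposes, let alone the capacity to perform it.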

Besides the core assumptions, the model also typically has a set of auxiliary assumptions that spatio-temporally specify the kind of social interaction between ‘rational’ actors that takes place in the model. These assumptions can be seen as answering questions such as: who are the actors, and where and when do they act; which specific goals do they have; what are their interests; what kind of expectations do they have; what are their feasible actions; what kind of agreements (contracts) can they enter into; how much and what kind of information do they possess; and how do the actions of the different individuals interact with each other?

So, the base model basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (the auxiliary assumptions thus serving as a kind of restriction of the intended domain of application for the core assumptions and the deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often silent) omissions, usually based on negligibility and applicability considerations. The hope, however, is that the ‘thin’ list of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.

These models are not primarily constructed for being able to analyze individuals and their aspirations, motivations, interests, etc., but typically for analyzing social phenomena as a kind of equilibrium that emerges through the interaction between individuals.

Now, of course, no one takes the base model (or the models that build on it) as a good (or, even less, true) representation of reality. That would demand a high degree of conformity with the essential characteristics of the real phenomena, which, even when weighing in pragmatic aspects such as ‘purpose’ and ‘adequacy,’ this ‘thin’ model can hardly deliver. The model is typically seen as a kind of thought-experimental ‘as if’ benchmark device for enabling a rigorous, mathematically tractable illustration of social interaction in an ideal-type model world, and for comparing that ‘ideal’ with reality. The ‘interpreted’ model is supposed to supply analytical and explanatory power, enabling us to detect and understand mechanisms and tendencies in what happens around us in real economies.

Based on the model — and on interpreting it as something more than a deductive-axiomatic system — predictions and explanations can be made and confronted with empirical data and what we think we know. The base model and its more or less tightly knit axiomatic core assumptions are used to set up further ‘as if’ models from which consistent and precise inferences are made. If the axiomatic premises are true, the conclusions necessarily follow. But if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They usually do not. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold.

If the real world is fuzzy, vague and indeterminate, then why should our models be built upon a desire to describe it as precise and predictable? The logic of idealization that permeates the base model is a marvellous tool in mathematics and axiomatic-deductivist systems, but a poor guide for action in real-world systems, where concepts and entities are without clear boundaries and continually interact and overlap.

Being told that the model is rigorous and amenable to ‘successive approximations’ to reality is of little avail, especially when the law-like (nomological) core assumptions are highly questionable and extremely difficult to test. Being able to construct ‘thought experiments’ depicting logical possibilities does not take us very far. An obvious problem with the mainstream base model is that it is formulated in such a way that it is extremely difficult to empirically test and decisively ‘corroborate’ or ‘falsify.’

As Elster writes, such models have — from an explanatory point of view — indeed “no value at all.” The ‘thinness’ is bought at too high a price, unless you decide to leave the intended area of application unspecified or immunize your model by interpreting it as nothing more than two sets of assumptions making up a content-less theoretical system with no connection whatsoever to reality.

Lars Pålsson Syll
Professor at Malmö University. Primary research interests: the philosophy, history and methodology of economics.
