
Economic modelling


A couple of years ago, Paul Krugman had a piece up on his blog arguing that the ‘discipline of modeling’ is a sine qua non for tackling politically and emotionally charged economic issues:

In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.

So when ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’

Yours truly is, to say the least, far from convinced. The alarm that goes off in my brain tells me that this, rather than being helpful for understanding real-world economic issues, is more of an ill-advised plaidoyer for voluntarily putting on a methodological straitjacket of unsubstantiated assumptions known to be false.

Let me just give two examples to illustrate my point.

In 1817 David Ricardo presented — in his Principles — a theory meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of exports and imports is governed by comparative advantage: countries export goods in which they have a comparative advantage and import goods in which they have a comparative disadvantage.
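To see the logic at work, consider a purely illustrative two-country, two-good example (the numbers are mine, not Ricardo's). Suppose the labour hours required per unit of output are

\[
\begin{array}{lcc}
 & \text{cloth} & \text{wine} \\
\text{England} & 2 & 4 \\
\text{Portugal} & 1 & 1
\end{array}
\]

Portugal is absolutely more productive in both goods, but England's opportunity cost of one unit of cloth is \(2/4 = 0.5\) units of wine, against Portugal's \(1/1 = 1\). England therefore has the comparative advantage in cloth and Portugal in wine, and on Ricardo's argument both countries gain if England exports cloth and Portugal exports wine.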

Ricardo’s theory of comparative advantage, however, didn’t explain why the comparative advantage was the way it was. At the beginning of the 20th century, two Swedish economists — Eli Heckscher and Bertil Ohlin — presented a theory/model/theorem according to which the comparative advantages arose from differences in factor endowments between countries. Countries have comparative advantages in producing goods that use intensively the production factors that are most abundant in them. Countries would accordingly mostly export goods that use their abundant factors of production and import goods that mostly use factors of production that are scarce.
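In the simplest two-country, two-good, two-factor version, the prediction can be stated compactly (the notation is mine, used only to fix ideas): if country A is relatively capital-abundant, i.e.

\[
\frac{K_A}{L_A} > \frac{K_B}{L_B},
\]

then, given the assumptions listed below, A exports the capital-intensive good and imports the labour-intensive one, while B does the reverse.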

The Heckscher-Ohlin theorem — as do the elaborations on it by e.g. Vanek, Stolper and Samuelson — builds on a series of restrictive and unrealistic assumptions. The most critically important — besides the standard market-clearing equilibrium assumptions — are:

(1) Countries use identical production technologies.

(2) Production takes place with constant returns to scale technology.

(3) Within countries the factor substitutability is more or less infinite.

(4) Factor prices are equalised (the Stolper-Samuelson extension of the theorem).

These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false. 

That being so, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.

From a methodological point of view, one can, of course, also wonder how we are supposed to evaluate tests of a theorem built on assumptions known to be false. What is the point of such tests? What can those tests possibly teach us? From falsehoods, anything logically follows.

Modern (expected) utility theory is a good example of this. Since the specification of preferences is left with almost no restrictions whatsoever, every imaginable piece of evidence is safely made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy’, but it is totally void of any empirical value.

Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing — just like Gary Becker’s ‘economics of everything’, it only makes nonsense of economic science.

Some people have trouble with the fact that by allowing false assumptions mainstream economists can generate whatever conclusions they want in their models. But that’s really nothing very deep or controversial. What I’m referring to is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.
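Spelled out in standard propositional logic (a textbook sketch, not anything specific to economics), the derivation runs: from \(p\) and \(\neg p\), any statement \(q\) whatsoever follows.

\[
\begin{aligned}
&1.\quad p && \text{premise}\\
&2.\quad \neg p && \text{premise}\\
&3.\quad p \lor q && \text{from 1, disjunction introduction}\\
&4.\quad q && \text{from 2 and 3, disjunctive syllogism}
\end{aligned}
\]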

Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever.

On the question of tautology, I think it is only fair to say that the way axioms and theorems are formulated in mainstream (neoclassical) economics often makes them tautological and informationally totally empty.

Using false assumptions, mainstream modellers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ just assume e.g. that ‘all economists are from Chicago’ and that ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but it is of course factually totally wrong. Models and theories building on that kind of reasoning are nothing but a pointless waste of time.
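Put formally (my notation, added only for illustration), the argument is a perfectly valid deduction whose premises happen to be false:

\[
\forall x\,\bigl(E(x)\rightarrow C(x)\bigr),\qquad \forall x\,\bigl(E(x)\wedge C(x)\rightarrow A(x)\bigr)\;\vdash\;\forall x\,\bigl(E(x)\rightarrow A(x)\bigr),
\]

where \(E(x)\) reads ‘x is an economist’, \(C(x)\) ‘x is from Chicago’ and \(A(x)\) ‘x considers austerity to be the right policy’. Validity guarantees a true conclusion only from true premises; since the premises are false, the deduction tells us nothing about the world.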

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
