Mainstream economics — a pointless waste of time
Paul Krugman has a piece up on his blog arguing that the ‘discipline of modeling’ is a sine qua non for tackling politically and emotionally charged economic issues:
You might say that the way to go about research is to approach issues with a pure heart and mind: seek the truth, and derive any policy conclusions afterwards. But that, I suspect, is rarely how things work. After all, the reason you study an issue at all is usually that you care about it, that there’s something you want to achieve or see happen. Motivation is always there; the trick is to do all you can to avoid motivated reasoning that validates what you want to hear.
In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.
Hmm …
So when Krugman and other ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’
Yours truly is, to say the least, far from convinced. The alarm that goes off in my brain tells me that this, rather than being helpful for understanding real-world economic issues, sounds more like an ill-advised plaidoyer for voluntarily putting on a methodological straitjacket of unsubstantiated and known-to-be-false assumptions.
Let me just give two examples to illustrate my point.
In 1817 David Ricardo presented, in his Principles, a theory meant to explain why countries trade and, based on the concept of opportunity cost, how the pattern of exports and imports is governed by countries exporting goods in which they have a comparative advantage and importing goods in which they have a comparative disadvantage.
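To see the logic of opportunity cost at work, here is a stylized two-country, two-good illustration in the spirit of Ricardo’s England and Portugal example (the numbers are hypothetical, chosen only to keep the arithmetic transparent). Suppose the labour needed to produce one unit of each good is

\[
\begin{array}{l|cc}
 & \text{cloth} & \text{wine} \\
\hline
\text{Country A} & 100 & 120 \\
\text{Country B} & 90 & 80
\end{array}
\]

In A the opportunity cost of a unit of cloth is \(100/120 \approx 0.83\) units of wine; in B it is \(90/80 \approx 1.13\) units of wine. Since \(0.83 < 1.13\), A has the comparative advantage in cloth and B in wine, even though B is absolutely more productive in both goods. On Ricardo’s account, A therefore exports cloth and B exports wine.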
Ricardo’s theory of comparative advantage, however, didn’t explain why the comparative advantage was what it was. In the beginning of the 20th century two Swedish economists, Eli Heckscher and Bertil Ohlin, presented a theory/model/theorem according to which comparative advantages arise from differences in factor endowments between countries. A country has a comparative advantage in producing goods that make intensive use of the production factors it has in relative abundance. Countries would a fortiori mostly export goods that use their abundant factors of production and import goods that mostly use factors of production that are scarce at home.
The Heckscher-Ohlin theorem, like the elaborations on it by e.g. Vanek, Stolper and Samuelson, builds on a series of restrictive and unrealistic assumptions. The most critically important, beside the standard market-clearing equilibrium assumptions, are:
(1) Countries use identical production technologies.
(2) Production takes place with a constant returns to scale technology.
(3) Within countries the factor substitutability is more or less infinite.
(4) Factor prices are equalised (the Stolper-Samuelson extension of the theorem).
These assumptions are, as almost all empirical testing of the theorem has shown, totally unrealistic. That is, they are empirically false.
That being so, one could indeed wonder why on earth anyone should be interested in applying this theorem to real-world situations. Like so many other mainstream mathematical models taught to economics students today, this theorem has very little to do with the real world.
From a methodological point of view one can, of course, also wonder how we are supposed to evaluate tests of a theorem built on known-to-be-false assumptions. What is the point of such tests? What can those tests possibly teach us? From falsehoods anything logically follows.
Modern (expected) utility theory is a good example of this. By leaving the specification of preferences almost entirely unrestricted, every imaginable piece of evidence can safely be made compatible with the all-embracing ‘theory’; and a theory without informational content never risks being empirically tested and found falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may of course be very ‘handy’, but it is totally void of any empirical value.
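To see how cheap the fit is, consider a schematic illustration (the construction is mine, purely for exposition). Take any finite record of observed choices \(x_1 \in S_1, \dots, x_n \in S_n\) and define

\[
u(x) =
\begin{cases}
1 & \text{if } x \in \{x_1, \dots, x_n\} \\
0 & \text{otherwise.}
\end{cases}
\]

Every observed choice then attains the maximal value of \(u\) on its choice set, so the data are ‘rationalized’ by utility maximization whatever they happen to be. Without substantive restrictions on preferences, the theory rules nothing out.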
Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing. Just like Gary Becker’s ‘economics of everything’, it only makes nonsense of economic science.
Some people have trouble with the fact that, by allowing false assumptions, mainstream economists can generate whatever conclusions they want in their models.
But that’s really nothing very deep or controversial. What I’m referring to is the well-known ‘principle of explosion,’ according to which if both a statement and its negation are considered true, any statement whatsoever can be inferred.
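For concreteness, here is the textbook natural-deduction sketch of the principle (a standard logic exercise, nothing economic about it):

\[
\begin{aligned}
&1.\ P && \text{premise} \\
&2.\ \neg P && \text{premise} \\
&3.\ P \lor Q && \text{from (1), disjunction introduction} \\
&4.\ Q && \text{from (2) and (3), disjunctive syllogism}
\end{aligned}
\]

Since \(Q\) is arbitrary, a contradictory set of premises entails any statement whatsoever.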
Whilst tautologies, purely existential statements and other nonfalsifiable statements assert, as it were, too little about the class of possible basic statements, self-contradictory statements assert too much. From a self-contradictory statement, any statement whatsoever can be validly deduced. Consequently, the class of its potential falsifiers is identical with that of all possible basic statements: it is falsified by any statement whatsoever. (Karl Popper, The Logic of Scientific Discovery)
On the question of tautology, I think it is only fair to say that, the way axioms and theorems are formulated in mainstream (neoclassical) economics, they are often made tautological and informationally totally empty.
Using false assumptions, mainstream modelers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy,’ just assume ‘all economists are from Chicago’ and ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction, but it is of course factually totally wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.
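Spelled out schematically (my notation, purely illustrative), the inference is a classic syllogism:

\[
\forall x\,\big(E(x) \rightarrow C(x)\big),\quad \forall x\,\big(C(x) \rightarrow A(x)\big) \;\vdash\; \forall x\,\big(E(x) \rightarrow A(x)\big)
\]

where \(E(x)\) reads ‘x is an economist’, \(C(x)\) ‘x is from Chicago’, and \(A(x)\) ‘x considers austerity the right policy’. The deduction is formally valid, but since the premises are false, the conclusion carries no empirical weight. Validity comes cheap; it is soundness that matters.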