Lars Pålsson Syll | Economics
How to use models in economics
The reason you study an issue at all is usually that you care about it, that there’s something you want to achieve or see happen. Motivation is always there; the trick is to do all you can to avoid motivated reasoning that validates what you want to hear.
In my experience, modeling is a helpful tool (among others) in avoiding that trap, in being self-aware when you’re starting to let your desired conclusions dictate your analysis. Why? Because when you try to write down a model, it often seems to lead some place you weren’t expecting or wanting to go. And if you catch yourself fiddling with the model to get something else out of it, that should set off a little alarm in your brain.
Paul Krugman
Hmm …
So when Krugman and other ‘modern’ mainstream economists use their models — standardly assuming rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative agents with homothetic and identical preferences, etc. — and standardly ignoring complexity, diversity, uncertainty, coordination problems, non-market clearing prices, real aggregation problems, emergence, expectations formation, etc. — we are supposed to believe that this somehow helps them ‘to avoid motivated reasoning that validates what you want to hear.’
Yours truly is, to say the least, far from convinced. The alarm that goes off in my brain tells me that this, rather than helping us understand real-world economic issues, sounds more like an ill-advised plaidoyer for voluntarily taking on a methodological straitjacket of unsubstantiated and known-to-be-false assumptions.
Modern (expected) utility theory is a good example of this. Since the specification of preferences is left almost without any restrictions whatsoever, every imaginable piece of evidence can safely be made compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and falsified. Used in mainstream economics’ ‘thought experimental’ activities, it may, of course, be very ‘handy’, but it is totally void of any empirical value.
Utility theory has, like so many other economic theories, morphed into an empty theory of everything. And a theory of everything explains nothing — just like Gary Becker’s ‘economics of everything’, it only makes nonsense of economic science.
Using false assumptions, mainstream modellers can derive whatever conclusions they want. Wanting to show that ‘all economists consider austerity to be the right policy’, one need only assume that ‘all economists are from Chicago’ and that ‘all economists from Chicago consider austerity to be the right policy.’ The conclusion follows by deduction — but it is, of course, factually wrong. Models and theories built on that kind of reasoning are nothing but a pointless waste of time.
Mainstream economics today is mainly an approach in which the goal is thought to be to write down a set of empirically untested assumptions and then deductively infer conclusions from them. Applying this deductivist thinking to economics, economists usually set up ‘as if’ models based on a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is, of course, that if the axiomatic premises are true, the conclusions necessarily follow. The snag is that if the models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often don’t, for the simple reason that empty theoretical exercises of this kind tell us nothing about the world. When addressing real economies, the idealizations necessary for the deductivist machinery to work simply don’t hold.
So how should we evaluate the search for ever-greater precision and the concomitant arsenal of mathematical and formalist models? To a large extent, the answer hinges on what we want our models to do and how we fundamentally understand the world.
The world as we know it has limited scope for certainty and perfect knowledge. Its intrinsic and almost unlimited complexity and the interrelatedness of its parts prevent us from treating it as constituted by atoms with discretely distinct, separable and stable causal relations. Our knowledge accordingly has to be of a rather fallible kind. To search for deductive precision and rigour in such a world is self-defeating. The only way to defend such an endeavour is to restrict oneself to proving things in closed model-worlds. Why we should care about these, rather than asking questions of real-world relevance, is hard to see.