Lars Syll
Since there will generally be many micro foundations consistent with some given aggregate pattern, empirical support for an aggregate hypothesis does not constitute empirical support for any particular micro foundation … Lucas himself points out that short-term macroeconomic forecasting models work perfectly well without choice-theoretic foundations: “But if one wants to know how behaviour is likely to change under some change in policy, it is necessary to model the way people make choices” (Snowdon and Vane 2005, interview with Robert Lucas). The question, of course, is why on earth would one insist on deriving policy implications from foundations that deliberately misrepresent actual behavior?
Yes, indeed, why would one?
Defenders of microfoundations, with their rational-expectations-equipped representative agents engaged in intertemporal optimization, often argue as if sticking with simple representative-agent macroeconomic models does not impart a bias to the analysis. Yours truly unequivocally rejects that unsubstantiated view.
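The observational-equivalence point in the quotation above is easy to illustrate numerically. The following sketch is a toy example with invented numbers (the 0.8 aggregate propensity to consume and the uniform spread of individual propensities are purely illustrative): it generates exactly the same aggregate consumption series from two entirely different micro foundations, so aggregate data alone cannot discriminate between them.

```python
import numpy as np

rng = np.random.default_rng(42)
T, N = 200, 1_000
income = 100 + rng.normal(0, 5, size=T)   # illustrative aggregate income path

# Micro foundation A: one representative agent consuming a fixed share of income.
consumption_A = 0.8 * income

# Micro foundation B: N heterogeneous agents with dispersed propensities to
# consume (recentred so they average exactly 0.8), each spending out of an
# equal share of aggregate income.
mpc = rng.uniform(0.6, 1.0, size=N)
mpc += 0.8 - mpc.mean()                   # force the average propensity to 0.8
consumption_B = (income[:, None] * mpc / N).sum(axis=1)

# The two aggregates coincide, so no aggregate test can tell A from B.
print(np.allclose(consumption_A, consumption_B))   # True
```

Any empirical success of the aggregate relation c = 0.8y therefore ‘supports’ both stories equally, which is precisely the quotation’s point.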
These defenders often also maintain that there are no methodologically coherent alternatives to microfoundations modelling. That allegation is, of course, difficult to evaluate, since it hinges substantially on how coherence is defined. But one thing I do know is that the kind of microfoundationalist macroeconomics that New Classical and ‘New Keynesian’ economists pursue is not methodologically coherent by the standard definition of coherence (see e.g. here). And that ought to be rather embarrassing for those macroeconomists to whom axiomatics and deductivity are the hallmarks of science tout court.
The fact that Lucas introduced rational expectations as a consistency axiom is not really an argument for why we should accept it as an assumption in a theory or model purporting to explain real macroeconomic processes (see e.g. here). And although virtually any macroeconomic empirical claim is contestable, so is any claim in micro (see e.g. here).
Using formal mathematical modelling, mainstream economists can certainly guarantee that their conclusions hold given the assumptions. However, the validity we get in abstract model worlds does not automatically transfer to real-world economies. Validity may be good, but it is not enough: a perfectly valid argument with false premises establishes nothing about the world, since soundness requires true premises as well. From a realist perspective, both relevance and soundness are sine qua non.
In their search for validity, rigour and precision, mainstream macro modellers of various ilks construct microfounded DSGE models that standardly assume rational expectations, Walrasian market clearing, unique equilibria, time invariance, linear separability and homogeneity of both inputs/outputs and technology, infinitely lived intertemporally optimizing representative household/consumer/producer agents with homothetic and identical preferences, etc., etc. At the same time, the models standardly ignore complexity, diversity, uncertainty, coordination problems, non-market-clearing prices, real aggregation problems, emergence, expectations formation, etc., etc.
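For concreteness, the core of such a model is the representative agent’s intertemporal optimization problem. In generic textbook notation (a stylized formulation, not drawn from any particular paper), it reads:

```latex
% The representative agent chooses a consumption path to solve
\max_{\{c_t\}}\; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t u(c_t),
\qquad 0 < \beta < 1,
% subject to the period-by-period budget constraint
a_{t+1} = (1 + r_t)\, a_t + w_t - c_t,
% which yields the standard consumption Euler equation
u'(c_t) = \beta\, \mathbb{E}_t\!\left[ (1 + r_{t+1})\, u'(c_{t+1}) \right].
```

Every assumption contested above (rational expectations, a single infinitely lived agent, time-invariant preferences) is encoded right here: in the conditional expectations operator E_t, the infinite horizon, and the fixed utility function u.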
The predominant strategy in mainstream macroeconomics today is to build models and make things happen in these ‘analogue-economy models.’ But even if macro-econometrics has supplied economists with rigorous replicas of real economies, the ability to construct toy models ad nauseam does not give much leverage if the goal of theory is to make accurate forecasts or to explain what happens in real economies.
‘Rigorous’ and ‘precise’ New Classical models (and that goes for the ‘New Keynesian’ variety too) cannot be considered anything other than unsubstantiated conjectures as long as they are not supported by evidence from outside the theory or model. To my knowledge, no decisive empirical evidence of that kind has ever been presented.
And applying a ‘Lucas critique’ to New Classical and ‘New Keynesian’ models themselves, it is obvious that they too fail.
Changing ‘policy rules’ cannot just be presumed to leave investment and consumption behaviour, and a fortiori technology, unaffected; yet that is exactly what the invariance assumption requires. Technology and tastes cannot live up to the status of an economy’s deep and structurally stable Holy Grail. They too are part and parcel of an ever-changing and open economy. Lucas’s hope of being able to model the economy as ‘a FORTRAN program’ and ‘gain some confidence that the component parts of the program are in some sense reliable prior to running it’ therefore seems, from an ontological point of view, totally misdirected. The failure of the attempt to anchor the analysis in the allegedly stable deep parameters ‘tastes’ and ‘technology’ shows that if you neglect ontological considerations pertaining to the target system, reality ultimately gets its revenge when questions of bridging and of exporting model results are at last laid on the table.
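To make the argument concrete, here is a toy simulation in Python (not FORTRAN, and with invented numbers: the response parameter theta stands in, very crudely, for the allegedly ‘deep’ tastes-and-technology parameters). The first loop reproduces the mechanics of Lucas’s own critique: the reduced-form relation between money growth and output shifts when the policy rule changes, even though the deep parameter is held fixed. The second loop illustrates the point made above: if the ‘deep’ parameter itself adapts to the new rule, even the ‘structural’ model estimated under the old rule misforecasts.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000

def simulate(mu, theta):
    """Surprise-supply toy economy: output moves only with unanticipated money growth."""
    eps = rng.normal(size=T)
    m = mu + eps               # policy rule: systematic growth mu plus noise
    y = theta * (m - mu)       # agents anticipate mu, so only the surprise matters
    return m, y

# Lucas's own point: with theta held fixed, the reduced-form equation
# y = a + b*m is not invariant to the policy rule (the intercept shifts).
for mu in (2.0, 5.0):
    m, y = simulate(mu, theta=1.5)
    b, a = np.polyfit(m, y, 1)
    print(f"fixed theta,    rule mu={mu}:  y ~= {a:+.2f} + {b:.2f}*m")

# The point made above: the 'deep' parameter need not survive the policy
# change either. If behaviour adapts (here, crudely, theta falls as the
# rule becomes looser), a 'structural' model estimated under the old rule
# goes wrong under the new one as well.
for mu, theta in ((2.0, 1.5), (5.0, 0.8)):
    m, y = simulate(mu, theta)
    b, a = np.polyfit(m, y, 1)
    print(f"drifting theta, rule mu={mu}:  y ~= {a:+.2f} + {b:.2f}*m")
```

Nothing in the data generated under the first rule warns the modeller which of the two worlds she is in; only ontological assumptions about the target system, not the model itself, settle that.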
Mainstream economists are proud of having an ever-growing smorgasbord of models to cherry-pick from (as long as, of course, the models do not question the standard modelling strategy) when performing their analyses. The ‘rigorous’ and ‘precise’ deductions made in these closed models, however, are not matched by any similar stringency or precision at what ought to be the most important stage of any research: making statements about, and explaining things in, real economies. Although almost every mainstream economist holds the view that thought-experimental modelling has to be followed by confronting the models with reality (which is, after all, what they indirectly want to predict, explain, and understand using their models), at that point they all of a sudden become exceedingly vague and imprecise. It is as if all the intellectual force had been invested in the modelling stage, with nothing left over for what really matters: what exactly these models teach us about real economies.
No matter how precise and rigorous the analysis is, and no matter how hard one tries to cast the argument in modern mathematical form, such exercises do not push economic science forward one single millimetre if they do not stand the acid test of relevance to the target. No matter how clear, precise, rigorous, or certain the inferences delivered inside these models are, they do not per se say anything about real-world economies.
Proving things ‘rigorously’ in mathematical models is at most a starting point for doing an interesting and relevant economic analysis. Forgetting to supply export warrants to the real world makes the analysis an empty exercise in formalism without real scientific value.