from Lars Syll
In Dani Rodrik’s Economics Rules it is argued that ‘the multiplicity of models is economics’ strength,’ and that a science that has a different model for everything is non-problematic, since
economic models are cases that come with explicit user’s guides — teaching notes on how to apply them. That’s because they are transparent about their critical assumptions and behavioral mechanisms.
Hmm …
That is at odds with yours truly’s experience from studying mainstream economic models over four decades.
When the basic (DSGE) workhorse macroeconomic model is criticized — just to take an example — for its inability to explain involuntary unemployment, its defenders maintain that later ‘successive approximations’ and elaborations — especially newer search models — manage to do just that. One of the more conspicuous problems with those ‘solutions,’ however, is that they are as a rule constructed without seriously trying to warrant that the model-immanent assumptions and results are applicable in the real world. External validity is more or less a non-existent problematique, sacrificed on the altar of model derivations. This is not by chance. These theories and models do not come with transparent and ‘explicit user’s guides’ at all. And there’s a very obvious reason for that. For how could one even imagine empirically testing assumptions such as ‘wages being determined by Nash bargaining’ or ‘actors maximizing expected utility’ without coming to the conclusion that they are — in terms of realism and relevance — far from ‘good enough’ or ‘close enough’ to real-world situations?
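To make concrete what is being asked of such assumptions, here is a sketch (mine, not spelled out in the post itself) of the standard Nash-bargaining wage condition in search-and-matching models. The wage is assumed to maximize a weighted product of the worker’s and the firm’s surpluses,

$$ w^{*} = \arg\max_{w}\,\bigl(W(w) - U\bigr)^{\beta}\,\bigl(J(w) - V\bigr)^{1-\beta}, $$

where $W$ and $U$ denote the worker’s value of employment and of unemployment, $J$ and $V$ the firm’s value of a filled job and of a vacancy, and $\beta \in (0,1)$ the worker’s bargaining power. Likewise, ‘actors maximizing expected utility’ means that observed choices are assumed to solve

$$ \max_{a}\,\sum_{s} p(s)\, u\bigl(x(a,s)\bigr). $$

None of the value functions, the bargaining-power parameter, the subjective probabilities, or the utility function is directly observable, which is precisely why warranting such assumptions empirically is so problematic.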
Typical mainstream neoclassical modeling assumptions — with or without due pragmatic considerations — cannot in any relevant way be considered anything but imagined model-world assumptions that have nothing at all to do with the real world we happen to live in.
There is no real transparency as to the deeper significance and role of the chosen set of axiomatic assumptions.
There is no explicit user’s guide, nor any indication of how we should be able, as Rodrik puts it, to ‘discriminate’ between the ‘bewildering array of possibilities’ that flow from such outlandish and known-to-be-false assumptions.
Theoretical models built on piles of known-to-be-false assumptions come nowhere close to being scientific explanations. On the contrary: they are untestable and, a fortiori, totally worthless from the point of view of scientific relevance.
And — as Noah Smith noted the other day — it certainly isn’t unproblematic to portray having infinitely many models as something laudable:
One thing I still notice about macro … is the continued proliferation of models. Almost every macro paper has a theory section. Because it takes more than one empirical paper to properly test a theory, this means that theories are being created in macro at a far greater rate than they can be tested.
That seems like a problem to me. If you have an infinite collection of models sitting on the shelves, how does theory inform policy? If policy advisers have an endless list of models to choose from, how do they pick which one to use? It seems like a lot of the time it’ll come down to personal preference, intuition, or even ideology …
It seems to me that if you want to make a field truly empirical, you don’t just need to look at data – you need to use data to toss out models, and model elements like the Euler equation … I also think macro people in general could stand to be more proactive about using new data to critically reexamine canonical assumptions … That seems like it’ll raise the chances that the macro consensus gets the next crisis right before it happens, rather than after.
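For readers who have not met it, the ‘Euler equation’ Smith singles out is, in its canonical consumption form, the intertemporal first-order condition

$$ u'(c_t) = \beta\,\mathbb{E}_t\!\bigl[(1 + r_{t+1})\, u'(c_{t+1})\bigr], $$

which links the marginal utility of consumption today to its expected discounted value tomorrow. It is exactly the kind of canonical model element that, on Smith’s argument, the data should be allowed to toss out.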