
DSGE modeling – a statistical critique




As Paul Romer’s recent assault on ‘post-real’ macroeconomics showed, yours truly is not the only one who questions the validity and relevance of DSGE modeling. After reading one of my posts on the issue, the eminent statistician Aris Spanos kindly sent me a working paper in which he discusses the validity of DSGE models and shows that the calibrated structural models are often at odds with observed data, and that many of the ‘deep parameters’ used are not even identifiable.

Interesting reading. And confirming, once again, that DSGE models do not marry particularly well with real-world data. This should come as no surprise: microfounded general equilibrium models with intertemporally optimizing representative agents seldom do.

This paper brings out several weaknesses of the traditional DSGE modeling, including statistical misspecification, non-identification of deep parameters, substantive inadequacy, weak forecasting performance and potentially misleading policy analysis. It is argued that most of these weaknesses stem from failing to distinguish between statistical and substantive adequacy and secure the former before assessing the latter. The paper untangles the statistical from the substantive premises of inference with a view to delineate the above mentioned problems and suggest solutions. The critical appraisal is based on the Smets and Wouters (2007) DSGE model using US quarterly data. It is shown that this model is statistically misspecified …
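To see what ‘statistically misspecified’ means in practice, recall that in Spanos’s framework statistical adequacy is checked by testing the probabilistic assumptions the estimated model imposes on the data (independence, normality, homoskedasticity, parameter constancy) before any substantive conclusions are drawn. The following is only a minimal sketch on synthetic data, not the Poudyal and Spanos procedure or the Smets and Wouters model: it shows how an assumed i.i.d. Normal specification is rejected by a standard misspecification test when the data are in fact highly persistent.

```python
# Minimal sketch (synthetic data, not the Smets-Wouters model):
# probe the probabilistic assumptions an assumed i.i.d. Normal
# specification imposes on the data before trusting inferences from it.
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)

# Generate a persistent AR(1) series; the assumed 'theory' model wrongly
# treats it as i.i.d. Normal fluctuations around a constant mean.
T, rho = 200, 0.8
y = np.empty(T)
y[0] = rng.normal()
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.normal()

resid = y - y.mean()  # residuals implied by the assumed specification

# Assumption: no serial correlation (Ljung-Box test at lag 4).
lb = acorr_ljungbox(resid, lags=[4])
print("Ljung-Box p-value:", float(lb["lb_pvalue"].iloc[0]))  # ~0: rejected

# Assumption: normality (Jarque-Bera test). This one may well pass,
# since the innovations really are Gaussian.
jb = stats.jarque_bera(resid)
print("Jarque-Bera p-value:", jb.pvalue)
```

Note that one assumption (normality) can survive while another (independence) fails decisively; each has to be probed separately, and it is exactly this piecemeal assessment of statistical adequacy that the paper argues DSGE practice skips.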

Lucas’s (1980) argument that “Any model that is well enough articulated to give clear answers to the questions we put to it will necessarily be artificial, abstract, patently ‘unreal’” (p. 696) is misleading because it blurs the distinction between substantive and statistical adequacy. There is nothing wrong with constructing a simple, abstract and idealised theory model that aims to capture key features of the phenomenon of interest, with a view to shedding light on (understanding, explaining, forecasting) economic phenomena, as well as gaining insight concerning alternative policies. Problems of unreliable inference arise when the statistical model implicitly specified by the theory model is statistically misspecified, and no attempt is made to reliably assess whether the theory model does, indeed, capture the key features of the phenomenon of interest; see Spanos (2009a). As argued by Hendry (2009):

“This implication is not a tract for mindless modeling of data in the absence of economic analysis, but instead suggests formulating more general initial models that embed the available economic theory as a special case, consistent with our knowledge of the institutional framework, historical record, and the data properties … Applied econometrics cannot be conducted without an economic theoretical framework to guide its endeavours and help interpret its findings. Nevertheless, since economic theory is not complete, correct, and immutable, and never will be, one also cannot justify an insistence on deriving empirical models from theory alone.” (pp. 56-57)

Statistical misspecification is not the inevitable result of abstraction and simplification, but it stems from imposing invalid probabilistic assumptions on the data.

Niraj Poudyal & Aris Spanos
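The closing point, that misspecification stems from imposing invalid probabilistic assumptions rather than from abstraction as such, can be illustrated with a small simulation. The sketch below (synthetic data, illustrative numbers only) computes textbook i.i.d. standard errors for the mean of a persistent AR(1) series and checks how often the nominal 95% confidence interval actually covers the true mean.

```python
# Minimal sketch of the cost of an invalid probabilistic assumption:
# use i.i.d. standard errors for the mean of a persistent AR(1) series
# and check the actual coverage of the nominal 95% interval.
import numpy as np

rng = np.random.default_rng(1)
T, rho, reps = 200, 0.8, 2000
covered = 0
for _ in range(reps):
    y = np.empty(T)
    y[0] = rng.normal() / np.sqrt(1 - rho**2)  # stationary start
    for t in range(1, T):
        y[t] = rho * y[t - 1] + rng.normal()
    m = y.mean()                          # true mean is 0
    se_iid = y.std(ddof=1) / np.sqrt(T)   # valid only under independence
    covered += abs(m) <= 1.96 * se_iid

print(f"actual coverage of nominal 95% CI: {covered / reps:.2f}")
# With rho = 0.8 the actual coverage comes out near one half, far
# below the promised 0.95.
```

With rho = 0.8 the interval covers the true mean only about half the time rather than 95% of it, so inferences built on the invalid independence assumption substantially overstate their own reliability; this is precisely the sense in which statistical misspecification makes inference unreliable.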

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history and methodology of economics.
