from Lars Syll
The standard NK [New Keynesian] model, like most of its predecessors in the RBC literature, represents an economy inhabited by an infinitely-lived representative household. That assumption, while obviously unrealistic, may be justified by the belief that, like so many other aspects of reality, the finiteness of life and the observed heterogeneity of individuals along many dimensions … can be safely ignored for the purposes of explaining aggregate fluctuations and their interaction with monetary policy, with the consequent advantages in terms of tractability …
There is a sense in which none of the extensions of the NK model described above can capture an important aspect of most financial crises, namely, a gradual build-up of financial imbalances leading to an eventual “crash” characterized by defaults, sudden-stops of credit flows, asset price declines, and a large contraction in aggregate demand, output and employment. By contrast, most of the models considered above share with their predecessors a focus on equilibria that take the form of stationary fluctuations driven by exogenous shocks. This is also the case in variants of those models that allow for financial frictions of different kinds and which have become quite popular as a result of the financial crisis … The introduction of financial frictions in those models often leads to an amplification of the effects of non-financial shocks. It also makes room for additional sources of fluctuations related to the presence of financial frictions … or exogenous changes in the tightness of borrowing constraints … Most attempts to use a version of the NK models to explain the “financial crisis,” however, end up relying on a large exogenous shock that impinges on the economy unexpectedly, triggering a large recession, possibly amplified by a financial accelerator mechanism embedded in the model. It is not obvious what the empirical counterpart to such an exogenous shock is.
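Galí's point can be illustrated with a toy simulation. The sketch below is not any specific NK model — it is a hypothetical linearized "output gap" process x_t = ρ·x_{t-1} + ε_t of the kind these models reduce to, with all names and parameter values chosen for illustration only. It shows that fluctuations in such a setup are stationary by construction: a "crisis" appears only if the modeller injects a large exogenous shock by hand, with no endogenous build-up of imbalances.

```python
import random

def simulate(periods=200, rho=0.9, sigma=0.01,
             crisis_at=None, crisis_size=-0.10, seed=42):
    """Simulate a linearized 'output gap' x_t = rho * x_{t-1} + eps_t.

    Fluctuations are stationary, driven only by small exogenous shocks
    eps_t ~ N(0, sigma^2). A 'financial crisis' occurs only if we add a
    large exogenous shock by hand at period `crisis_at` -- nothing in
    the model itself generates a gradual build-up toward a crash.
    """
    rng = random.Random(seed)
    x, path = 0.0, []
    for t in range(periods):
        eps = rng.gauss(0.0, sigma)
        if t == crisis_at:
            eps += crisis_size  # the hand-inserted 'crisis' shock
        x = rho * x + eps
        path.append(x)
    return path

calm = simulate()                 # ordinary stationary fluctuations
crash = simulate(crisis_at=100)   # same economy plus one big shock
```

Comparing the two paths makes the point: they are identical up to period 100, after which the "recession" in the second path is entirely the modeller's exogenous input, slowly decaying back to the steady state.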
Gali’s presentation sure raises important questions that serious economists ought to ask themselves. Using ‘simplifying’ tractability assumptions such as “infinitely-lived representative household,” rational expectations, common knowledge, additivity, ergodicity, etc. — because otherwise they cannot ‘manipulate’ their models or come up with ‘rigorous’ and ‘precise’ predictions and explanations — does not exempt economists, ‘New Keynesian’ or not, from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. If economists offer no other argument for their chosen modelling strategy than Gali’s “advantages in terms of tractability,” it is certainly fair to ask what the ultimate goal of the whole modelling endeavour is.
Gali’s article underlines that the essence of mainstream economic theory is its almost exclusive use of a deductivist methodology — a methodology that is employed more or less without any argument to justify its relevance.
The theories and models that mainstream economists construct describe imaginary worlds using a combination of formal sign systems, such as mathematics, and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target system that one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed for the purpose of delivering purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their “laboratories,” economists hope they can perform “thought experiments” and observe how these factors operate on their own, without impediments or confounders.
Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate, and that structure has to take some form or other. But instead of incorporating structures that are true to the target system, the settings made in economic models are based on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we can in principle move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far unless a thorough explication of the relation between theory, model and real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, then the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged to the real world is obviously not justifiable. Having a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to an open real-world target system.
Henry Louis Mencken once wrote that “there is always an easy solution to every human problem – neat, plausible and wrong.” And mainstream economics has indeed been wrong. Very wrong. Its main result, so far, has been to demonstrate the futility of trying to build a satisfactory bridge between formalistic-axiomatic deductivist models and real-world target systems. Assuming, for example, perfect knowledge, instant market clearing, and approximating aggregate behaviour with unrealistically heroic assumptions of “infinitely-lived” representative actors just will not do. The assumptions made surreptitiously eliminate the very phenomena we want to study: uncertainty, disequilibrium, structural instability and problems of aggregation and coordination between different individuals and groups.
The punch line is that most of the problems that mainstream economics is wrestling with issue from its attempts at formalistic modelling per se of social phenomena. If scientific progress in economics – as Robert Lucas and other latter-day mainstream economists seem to think – lies in our ability to tell “better and better stories,” without considering the realm of imagination and ideas a retreat from real-world target systems, one would, of course, expect our economics journals to be filled with articles supporting the stories with empirical evidence. However, I would argue that the journals still show a striking and embarrassing paucity of empirical studies that (try to) substantiate these theoretical claims. Equally amazing is how little economists have to say about the relationship between the model and real-world target systems. It is as though explicit discussion, argumentation and justification on the subject were thought not to be required. Mainstream economic theory is obviously navigating in dire straits.
If the ultimate criterion of success for a deductivist system is the extent to which it predicts and coheres with (parts of) reality, modern mainstream economics seems to be a hopeless misallocation of scientific resources. Focusing scientific endeavours on proving things in models is a gross misapprehension of what an economic theory ought to be about. Deductivist models and methods disconnected from reality are not relevant for predicting, explaining or understanding real-world economic target systems. These systems do not conform to the restricted closed-system structure that the mainstream modelling strategy presupposes.
Mainstream economic theory still today consists mainly of investigating economic models. It long ago gave up on the real world and contents itself with proving things about thought-up worlds. Empirical evidence still plays only a minor role in mainstream economic theory, where models largely function as substitutes for empirical evidence.
What is wrong with mainstream economics is not that it employs models per se, but that it employs poor models. They are poor because they do not bridge to the real-world target system in which we live. Hopefully humbled by the manifest failure of its theoretical pretences, the one-sided, almost religious, insistence on mathematical deductivist modelling as the only scientific activity worth pursuing in economics will give way to methodological pluralism based on ontological considerations rather than “consequent advantages in terms of tractability.”