Lars Pålsson Syll considers the following as important: Economics
Lucas’ Copernican revolution — nonsense on stilts
In Michel De Vroey’s version of the history of macroeconomics, Robert Lucas’ insistence that macroeconomics be pursued only within ‘equilibrium discipline,’ with equilibrium simply declared to exist as a postulate, is hailed as a ‘Copernican revolution.’ Equilibrium is not to be considered something that characterises real economies, but rather ‘a property of the way we look at things.’ De Vroey approvingly notes that this — as well as Lucas’ dismissal of disequilibrium as ‘unintelligible behaviour’ — ‘amounts to shrinking the pretence of equilibrium theory.’
Mirabile dictu!
Is it really a feasible methodology for economists to make a sharp divide between theory and reality, and then — like De Vroey and Lucas — treat the divide as something commendable and good? I think not.
Fortunately, there are other economists with a less hagiographic attitude towards Lucas and his nonsense on stilts.
Alessandro Vercelli is one:
The equilibria analysed by Lucas are conceived as stationary stochastic processes. The fact that they are stationary imposes a long series of restrictive hypotheses on the range of applicability of the heuristic model, and these considerably reduce the empirical usefulness of Lucas’s equilibrium method …
For such a method to make sense … the stationary ‘equilibrium’ stochastic process must also be ‘dynamically stable,’ or ‘ergodic,’ in the terminology of stochastic processes …
What is worse, if one adopts Lucas’s method of pure equilibrium implying the non-intelligibility of disequilibrium positions, there is no way to argue about the robustness of the alternative equilibria under consideration. In other words, Lucas’s heuristic model, not to mention the analytical models built according to his instructions, prove to be useless for the very purpose for which they were primarily constructed — the evaluation of alternative economic policies.
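To see concretely what is at stake in Vercelli’s stationarity and ergodicity point, here is a minimal simulation (my own illustration under assumed toy processes, not anything from Vercelli’s text): a process can be stationary and yet fail to be ergodic, in which case the time average computed along a single realisation does not converge to the ensemble average that equilibrium theorising relies on.

```python
import random

random.seed(0)
T, N = 20_000, 20_000

# Ergodic case: a stationary AR(1) with |phi| < 1. The time average
# along one long path converges to the ensemble mean (0 here).
phi = 0.5
x, xsum = 0.0, 0.0
for _ in range(T):
    x = phi * x + random.gauss(0, 1)
    xsum += x
print(f"AR(1) time average:         {xsum / T:+.3f}")   # close to 0

# Stationary but NON-ergodic case: X_t = mu + eps_t, where mu is
# drawn once per realisation. The time average along one path
# converges to that path's own mu, not to the ensemble mean 0.
mu = random.gauss(0, 5)                # fixed for this single path
ysum = sum(mu + random.gauss(0, 1) for _ in range(T))
print(f"one-path time average:      {ysum / T:+.3f}  (this path's mu = {mu:+.3f})")

# Averaging across many independent paths at a single date does
# recover the ensemble mean 0 — time and ensemble averages part ways.
esum = sum(random.gauss(0, 5) + random.gauss(0, 1) for _ in range(N))
print(f"ensemble average, one date: {esum / N:+.3f}")   # close to 0
```

In the non-ergodic case, no amount of observation of a single history reveals the ensemble distribution, which is precisely why ergodicity is a substantive restriction and not a harmless technicality.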
Another one is Roman Frydman, Professor of Economics at New York University and a long-time critic of the rational expectations hypothesis. In his seminal 1982 American Economic Review article Towards an Understanding of Market Processes: Individual Expectations, Learning, and Convergence to Rational Expectations Equilibrium — an absolute must-read for anyone with a serious interest in understanding what the issues are in the present discussion on rational expectations as a modeling assumption — he showed that the kind of models that Lucas recommends — models founded on ‘equilibrium discipline’ and the rational expectations hypothesis — are inadequate as representations of economic agents’ decision making.
Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies and institutions are those based on rational expectations and representative actors. As yours truly has tried to show in On the use and misuse of theories and models in economics, there is really no support for this conviction at all. For if this microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model-building is little more than hand-waving that gives us little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.
In one of their latest books on rational expectations, Roman Frydman and his colleague Michael Goldberg write:
The belief in the scientific stature of fully predetermined models, and in the adequacy of the Rational Expectations Hypothesis to portray how rational individuals think about the future, extends well beyond asset markets. Some economists go as far as to argue that the logical consistency that obtains when this hypothesis is imposed in fully predetermined models is a precondition of the ability of economic analysis to portray rationality and truth.
For example, in a well-known article published in The New York Times Magazine in September 2009, Paul Krugman (2009, p. 36) argued that Chicago-school free-market theorists “mistook beauty . . . for truth.” One of the leading Chicago economists, John Cochrane (2009, p. 4), responded that “logical consistency and plausible foundations are indeed ‘beautiful’ but to me they are also basic preconditions for ‘truth.’” Of course, what Cochrane meant by plausible foundations were fully predetermined Rational Expectations models. But, given the fundamental flaws of fully predetermined models, focusing on their logical consistency or inconsistency, let alone that of the Rational Expectations Hypothesis itself, can hardly be considered relevant to a discussion of the basic preconditions for truth in economic analysis, whatever “truth” might mean.
There is an irony in the debate between Krugman and Cochrane. Although the New Keynesian and behavioral models, which Krugman favors, differ in terms of their specific assumptions, they are every bit as mechanical as those of the Chicago orthodoxy. Moreover, these approaches presume that the Rational Expectations Hypothesis provides the standard by which to define rationality and irrationality.
…
In fact, the Rational Expectations Hypothesis requires no assumptions about the intelligence of market participants whatsoever … Rather than imputing superhuman cognitive and computational abilities to individuals, the hypothesis presumes just the opposite: market participants forgo using whatever cognitive abilities they do have. The Rational Expectations Hypothesis supposes that individuals do not engage actively and creatively in revising the way they think about the future. Instead, they are presumed to adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances. Thus, contrary to widespread belief, in the context of real-world markets, the Rational Expectations Hypothesis has no connection to how even minimally reasonable profit-seeking individuals forecast the future in real-world markets. When new relationships begin driving asset prices, they supposedly look the other way, and thus either abjure profit-seeking behavior altogether or forgo profit opportunities that are in plain sight.
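Frydman and Goldberg’s point that agents are presumed to ‘adhere steadfastly to a single mechanical forecasting strategy at all times and in all circumstances’ can be sketched with a toy example (the setup and numbers are my own, purely illustrative): an agent who forecasts with a fixed model-consistent rule keeps making one-sided errors the moment the true process shifts in a way the model rules out.

```python
import random

random.seed(1)

# Toy illustration: the agent's forecast is the fixed rule
# E[x_{t+1}] = mu0, the mean of the process the agent's model assumes.
# Halfway through, the true mean shifts to mu1 — a structural break
# the fixed rule cannot accommodate.
mu0, mu1, T, break_t = 0.0, 2.0, 2000, 1000
errors_before, errors_after = [], []
for t in range(T):
    mu = mu0 if t < break_t else mu1       # true data-generating mean
    x = mu + random.gauss(0, 1)            # realised outcome
    err = x - mu0                          # forecast is always mu0
    (errors_before if t < break_t else errors_after).append(err)

# Before the break the rule looks fine; after it, the forecast
# errors are systematically one-sided instead of averaging out.
print(f"mean error before break: {sum(errors_before) / len(errors_before):+.3f}")
print(f"mean error after break:  {sum(errors_after) / len(errors_after):+.3f}")
```

A minimally reasonable profit-seeker would revise the forecasting rule after a run of one-sided errors; the hypothetical agent above, like the Rational Expectations agent in the quote, never does.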
And in a recent article the same authors write:
Contemporary economists’ reliance on mechanical rules to understand – and influence – economic outcomes extends to macroeconomic policy as well, and often draws on an authority, John Maynard Keynes, who would have rejected their approach. Keynes understood early on the fallacy of applying such mechanical rules. “We have involved ourselves in a colossal muddle,” he warned, “having blundered in the control of a delicate machine, the working of which we do not understand.”
In The General Theory of Employment, Interest, and Money, Keynes sought to provide the missing rationale for relying on expansionary fiscal policy to steer advanced capitalist economies out of the Great Depression. But, following World War II, his successors developed a much more ambitious agenda. Instead of pursuing measures to counter excessive fluctuations in economic activity, such as the deep contraction of the 1930’s, so-called stabilization policies focused on measures that aimed to maintain full employment. “New Keynesian” models underpinning these policies assumed that an economy’s “true” potential – and thus the so-called output gap that expansionary policy is supposed to fill to attain full employment – can be precisely measured.
But, to put it bluntly, the belief that an economist can fully specify in advance how aggregate outcomes – and thus the potential level of economic activity – unfold over time is bogus …
The real macroeconomic challenge is to accept uncertainty and still try to explain why economic transactions take place — instead of simply conjuring the problem away à la Lucas by assuming equilibrium and rational expectations, and treating uncertainty as if it were reducible to stochastic risk. That is scientific cheating. And it has been going on for too long now.