Larry Summers — ‘New Keynesian’ economics needs to be replaced
Standard new Keynesian macroeconomics essentially abstracts away from most of what is important in macroeconomics. To an even greater extent, this is true of the dynamic stochastic general equilibrium (DSGE) models that are the workhorse of central bank staffs and much practically oriented academic work.
Why? New Keynesian models imply that stabilization policies cannot affect the average level of output over time and that the only effect policy can have is on the amplitude of economic fluctuations, not on the level of output. This assumption is problematic at a number of levels …
The problem has always been that it is difficult to beat something with nothing. This may be changing as topics like hysteresis, secular stagnation, and multiple equilibrium are getting more and more attention …
As macroeconomics was transformed in response to the Depression of the 1930s and the inflation of the 1970s, another 40 years later it should again be transformed in response to stagnation in the industrial world.
Maybe we can call it the Keynesian New Economics.
Mainstream macroeconomics is stuck with crazy models — and ‘New Keynesian’ macroeconomics and DSGE models certainly, as Summers puts it, “essentially abstract away from most of what is important in macroeconomics.”
Let me just give one example.
A lot of mainstream economists out there still think that price and wage rigidities are the prime movers behind unemployment. What is even worse — I’m totally gobsmacked every time I come across this utterly ridiculous misapprehension — is that some of them even think that these rigidities are the reason John Maynard Keynes gave for the high unemployment of the Great Depression. This is of course pure nonsense. For although Keynes in the General Theory devoted substantial attention to the subject of wage and price rigidities, he certainly did not hold this view.
Since unions/workers, contrary to classical assumptions, make wage bargains in nominal terms, they will – according to Keynes – accept lower real wages caused by higher prices, but resist lower real wages caused by lower nominal wages. Keynes, however, held it incorrect to attribute “cyclical” unemployment to this asymmetric behaviour. During the Depression money wages fell significantly and – as Keynes noted – unemployment still grew. Thus, lowering nominal wages does not generally lower unemployment.
In any specific labour market, lower wages could, of course, raise the demand for labour. But a general reduction in money wages would leave real wages more or less unchanged. The reasoning of the classical economists was, according to Keynes, a flagrant example of the “fallacy of composition.” By assuming that, since unions/workers in a specific labour market could negotiate real-wage reductions by lowering nominal wages, unions/workers in general could do the same, the classical economists confused micro with macro.
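A stylized way of seeing the point (my own illustration, not Keynes’s notation): suppose firms set prices as a roughly constant markup $\mu$ on unit labour costs, with $W$ the money wage and $A$ labour productivity. Then

$$P = \mu\,\frac{W}{A} \qquad\Longrightarrow\qquad \frac{W}{P} = \frac{A}{\mu},$$

so an economy-wide cut in $W$ lowers $P$ in roughly the same proportion and leaves the real wage $W/P$ essentially unchanged, which is why what may work in a single labour market does not carry over to the economy as a whole.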
Lowering nominal wages could not – according to Keynes – clear the labour market. Lowering wages – and possibly prices – could, perhaps, lower interest rates and increase investment. But to Keynes it would be much easier to achieve that effect by increasing the money supply. In any case, wage reductions were not seen by Keynes as a general substitute for an expansionary monetary or fiscal policy.
Even if lowering wages may have some potentially positive effects, there are also negative effects that weigh more heavily: deteriorating management-union relations, expectations of ongoing wage cuts delaying investment, debt deflation, et cetera.
So what Keynes actually argued in the General Theory was that the classical proposition that lowering wages would lower unemployment, and ultimately take economies out of depressions, was ill-founded and basically wrong.
To Keynes, flexible wages would only make things worse by leading to erratic price fluctuations. The basic explanation for unemployment is insufficient aggregate demand, and that is mostly determined outside the labour market.
The classical school [maintains that] while the demand for labour at the existing money-wage may be satisfied before everyone willing to work at this wage is employed, this situation is due to an open or tacit agreement amongst workers not to work for less, and that if labour as a whole would agree to a reduction of money-wages more employment would be forthcoming. If this is the case, such unemployment, though apparently involuntary, is not strictly so, and ought to be included under the above category of ‘voluntary’ unemployment due to the effects of collective bargaining, etc …
The classical theory … is best regarded as a theory of distribution in conditions of full employment. So long as the classical postulates hold good, unemployment, which is in the above sense involuntary, cannot occur. Apparent unemployment must, therefore, be the result either of temporary loss of work of the ‘between jobs’ type or of intermittent demand for highly specialised resources or of the effect of a trade union ‘closed shop’ on the employment of free labour. Thus writers in the classical tradition, overlooking the special assumption underlying their theory, have been driven inevitably to the conclusion, perfectly logical on their assumption, that apparent unemployment (apart from the admitted exceptions) must be due at bottom to a refusal by the unemployed factors to accept a reward which corresponds to their marginal productivity …

Obviously, however, if the classical theory is only applicable to the case of full employment, it is fallacious to apply it to the problems of involuntary unemployment – if there be such a thing (and who will deny it?). The classical theorists resemble Euclidean geometers in a non-Euclidean world who, discovering that in experience straight lines apparently parallel often meet, rebuke the lines for not keeping straight – as the only remedy for the unfortunate collisions which are occurring. Yet, in truth, there is no remedy except to throw over the axiom of parallels and to work out a non-Euclidean geometry. Something similar is required to-day in economics. We need to throw over the second postulate of the classical doctrine and to work out the behaviour of a system in which involuntary unemployment in the strict sense is possible.
J M Keynes, General Theory
People calling themselves ‘New Keynesians’ ought to be rather embarrassed by the fact that the kind of microfounded dynamic stochastic general equilibrium models they use cannot incorporate such a basic fact of reality as involuntary unemployment!
Of course, given that they work with microfounded representative-agent models, this should come as no surprise. If one representative agent is employed, all representative agents are. The kind of unemployment that occurs is voluntary, since it arises only from the adjustments of hours of work that these optimizing agents make to maximize their utility. Maybe that’s also the reason why prominent ‘New Keynesian’ macroeconomist Simon Wren-Lewis can write:
I think the labour market is not central, which was what I was trying to say in my post. It matters in a [New Keynesian] model only in so far as it adds to any change to inflation, which matters only in so far as it influences central bank’s decisions on interest rates.
In the basic DSGE models used by most ‘New Keynesians’, the labour market always clears – responding to a changing interest rate, expected lifetime income, or real wages, the representative agent maximizes the utility function by varying her labour supply, money holdings and consumption over time. Most importantly, if the real wage somehow deviates from its “equilibrium value,” the representative agent adjusts her labour supply, so that when the real wage is higher than its “equilibrium value,” labour supply is increased, and when the real wage is below its “equilibrium value,” labour supply is decreased.
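A minimal sketch of that mechanism, using the textbook intratemporal optimality condition of such models (a standard specification, not any particular author’s): with period utility over consumption $c_t$ and hours worked $n_t$, say $U(c_t, n_t) = \ln c_t - \chi \frac{n_t^{1+\varphi}}{1+\varphi}$, the representative household chooses hours so that the marginal rate of substitution between leisure and consumption equals the real wage,

$$\chi\, n_t^{\varphi}\, c_t = \frac{W_t}{P_t}.$$

Every movement in hours worked is then, by construction, an optimal response to the real wage; there is simply no room in the equation for someone who wants to work at the going wage but cannot find a job.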
In this model world, unemployment is always an optimal response to changes in labour market conditions. Hence, unemployment is totally voluntary. To be unemployed is something one optimally chooses to be.
In a blogpost discussing ‘New Keynesian’ macroeconomics and the definition of neoclassical economics, Paul Krugman writes:
So, what is neoclassical economics? … I think we mean in practice economics based on maximization-with-equilibrium. We imagine an economy consisting of rational, self-interested players, and suppose that economic outcomes reflect a situation in which each player is doing the best he, she, or it can given the actions of all the other players …
Some economists really really believe that life is like this — and they have a significant impact on our discourse. But the rest of us are well aware that this is nothing but a metaphor; nonetheless, most of what I and many others do is sorta-kinda neoclassical because it takes the maximization-and-equilibrium world as a starting point or baseline, which is then modified — but not too much — in the direction of realism.
This is, not to put too fine a point on it, very much true of Keynesian economics as practiced … New Keynesian models are intertemporal maximization modified with sticky prices and a few other deviations …
Why do things this way? Simplicity and clarity. In the real world, people are fairly rational and more or less self-interested; the qualifiers are complicated to model, so it makes sense to see what you can learn by dropping them. And dynamics are hard, whereas looking at the presumed end state of a dynamic process — an equilibrium — may tell you much of what you want to know.
Being myself sorta-kinda Keynesian, I find this analysis utterly unconvincing.
Maintaining that economics is a science in the “true knowledge” business, yours truly remains a skeptic of the pretences and aspirations of ‘New Keynesian’ macroeconomics. So far, I cannot really see that it has yielded very much in terms of realistic and relevant economic knowledge.
The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious underlabouring of its deeper philosophical and methodological foundations. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that ‘New Keynesians’ cannot give supportive evidence for considering it fruitful to analyze macroeconomic structures and events as the aggregated result of optimizing representative actors. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that ‘New Keynesian’ macroeconomics on the whole has not delivered anything other than unreal and irrelevant ‘as if’ models.
Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations. But models can never be more than a starting point in that endeavour. There is always the possibility that there are other variables – of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible – that were not considered in the model.
The kinds of laws and relations that economics has established are laws and relations about entities in models that presuppose causal mechanisms being atomistic and additive. When causal mechanisms operate in the real world, they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most contemporary endeavours in economic theoretical modelling – rather useless.
Economic policies cannot presuppose that what has worked before will continue to do so in the future. That macroeconomic models can get hold of correlations between different “variables” is not enough. If they cannot get at the causal structure that generated the data, they are not really “identified”. Dynamic stochastic general equilibrium (DSGE) macroeconomists – including ‘New Keynesians’ – have drawn the conclusion that the way to handle unstable relations is to construct models with clear microfoundations, in which forward-looking optimizing individuals and robust, deep behavioural parameters are taken to be stable even to changes in economic policies.
Here we are getting close to the heart of darkness in ‘New Keynesian’ macroeconomics. In thinking that they can rigorously deduce the aggregate effects of (representative) actors with their reductionist microfoundational methodology, ‘New Keynesian’ economists have to turn a blind eye to the emergent properties that characterize all open social systems – including the economic system. The interaction between animal spirits, trust, confidence, institutions, etc., cannot be deduced from or reduced to a question answerable on the individual level. Macroeconomic structures and phenomena have to be analyzed also on their own terms. And although one may easily agree with Krugman’s emphasis on simple models, the simplifications used may have to be simplifications adequate for macroeconomics, not those adequate for microeconomics.
‘New Keynesian’ macromodels describe imaginary worlds using a combination of formal sign systems such as mathematics and ordinary language. The descriptions made are extremely thin and to a large degree disconnected from the specific contexts of the target system that one (usually) wants to (partially) represent. This is not by chance. These closed formalistic-mathematical theories and models are constructed for the purpose of delivering purportedly rigorous deductions that may somehow be exportable to the target system. By analyzing a few causal factors in their “macroeconomic laboratories”, they hope to perform “thought experiments” and observe how these factors operate on their own and without impediments or confounders.
Unfortunately, this is not so. The reason is that economic causes never act in a socio-economic vacuum. Causes have to be set in a contextual structure to be able to operate, and that structure has to take some form or other. But instead of incorporating structures that are true to the target system, the settings made in these macroeconomic models are based on formalistic mathematical tractability. In the models they appear as unrealistic assumptions, usually playing a decisive role in getting the deductive machinery to deliver “precise” and “rigorous” results. This, of course, makes exporting to real-world target systems problematic, since these models – as part of a deductivist covering-law tradition in economics – are thought to deliver general and far-reaching conclusions that are externally valid. But how can we be sure the lessons learned in these theories and models have external validity, when they are based on highly specific unrealistic assumptions? As a rule, the more specific and concrete the structures, the less generalizable the results. Admitting that we in principle can move from (partial) falsehoods in theories and models to truth in real-world target systems does not take us very far, unless a thorough explication of the relation between theory, model and real-world target system is made. If models assume representative actors, rational expectations, market clearing and equilibrium, and we know that real people and markets cannot be expected to obey these assumptions, the warrant for supposing that conclusions or hypotheses about causally relevant mechanisms or regularities can be bridged is obviously lacking. To have a deductive warrant for things happening in a closed model is no guarantee that they are preserved when applied to the real world.
In microeconomics we know that aggregation really presupposes homothetic and identical preferences, something that almost never exists in real economies. The results derived under these assumptions are therefore not robust and do not capture the underlying mechanisms at work in any real economy. And models that are critically based on particular and odd assumptions – and are neither robust nor congruent to real-world economies – are of questionable value.
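To recall the standard result behind this claim (stated loosely, glossing over regularity conditions): exact aggregation to a single representative consumer requires that every individual’s indirect utility function take the Gorman polar form

$$v_i(p, m_i) = a_i(p) + b(p)\, m_i,$$

so that Engel curves are linear in income $m_i$ with a slope $b(p)$ common to all individuals; identical homothetic preferences are a special case in which this holds. Outside that knife-edge case, aggregate demand need not behave like the demand of any optimizing individual.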
Even if economies naturally presuppose individuals, it does not follow that we can infer or explain macroeconomic phenomena solely from knowledge of these individuals. Macroeconomics is to a large extent emergent and cannot be reduced to a simple summation of micro-phenomena. Moreover, as we have already argued, even these microfoundations are not immutable. The “deep parameters” of ‘New Keynesian’ DSGE models – “tastes” and “technology” – are not really the bedrock of constancy that ‘New Keynesians’ believe (pretend) them to be.
So I cannot concur with Krugman — and other sorta-kinda ‘New Keynesians’ — when they try to reduce Keynesian economics to “intertemporal maximization modified with sticky prices and a few other deviations.”
The final court of appeal for macroeconomic models is the real world, and as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model-building is little more than “hand waving” that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.
To Keynes this was self-evident. But obviously not so to ‘New Keynesians.’
Larry Summers is right — we certainly need “a new Keynesian economics that is more Keynesian and less new.”