
What Martin Sandbu gets wrong about neoclassical macro-economics



In the Financial Times, Martin Sandbu is wrong about neoclassical macroeconomic models. Let me explain by responding to his text, paragraph by paragraph. No links yet; I might add these later.

What do macroeconomists actually do? Without an answer to that question, it is difficult to articulate what they might be doing wrong. The rebuilding macroeconomic theory project is useful also to non-economists — perhaps especially to them — because it takes the time to dwell on how macroeconomists do what they do, in order to argue what they must do better.

Two points.

  • Sandbu answers this question by discussing what some macroeconomists do. To be precise: what neoclassical macroeconomists do. Stock-flow consistent modelling is left out of the discussion. NBER business cycle analysis is left out of the discussion. The kind of models which people like Steve Keen develop are left out of the discussion. Flow of Funds analysis as performed at national banks is left out of the discussion. Reduced form models and, in fact, ‘VAR’ models are left out of his discussion, too. His discussion covers only a very limited part of what macroeconomics is about.

  • Sandbu also does not mention what neoclassical macroeconomists do not do. They have not developed any kind of serious macro-measurement system. Their key variable, utility, is not measured (and not even defined in any precise way). Other variables are either defined in a very sloppy way (it is often not clear whether labor is measured in hours or in persons) or are inconsistent with both economic theory and measurement. Two examples of this: the concept of utility-enhancing consumption in the models as a rule excludes consumption of public goods and services. And ‘money’ is defined as ‘deposits’ while cash is not included; ‘the banks’ as a sector are supposed to ‘gather’ deposits, while in reality deposits never leave the banking system (see the toy sketch below).
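To make the deposit point concrete, here is a toy double-entry sketch, with made-up numbers and two stylized banks, showing that when a customer ‘moves’ money from one bank to another, reserves are transferred between the banks but total deposits stay inside the banking system:

```python
# Toy balance sheets for two stylized banks (hypothetical figures).
# A customer pays 100 from an account at Bank A to an account at Bank B.

banks = {
    "A": {"reserves": 200, "deposits": 1000},
    "B": {"reserves": 150, "deposits": 800},
}

def transfer(banks, src, dst, amount):
    """Settle a customer payment: the deposit and the reserves move together."""
    banks[src]["deposits"] -= amount  # the payer's claim on Bank A shrinks
    banks[src]["reserves"] -= amount  # Bank A settles in central bank reserves
    banks[dst]["deposits"] += amount  # the payee's claim on Bank B grows
    banks[dst]["reserves"] += amount  # Bank B receives the reserves

total_before = sum(b["deposits"] for b in banks.values())
transfer(banks, "A", "B", 100)
total_after = sum(b["deposits"] for b in banks.values())
assert total_before == total_after == 1800  # deposits never left the system
```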

Another way of saying this is that the history of changing macro is the history of macro methodology. As the project’s introductory article by Vines and Wills narrates, today’s troubled macroeconomics is the result of two intellectual revolutions. The crucial methodological move by Keynes in the 1930s was a theory of the economic system in which everything depends on everything else rather than the prior practice of simply juxtaposing separate analyses of the key markets (for labour, goods, and money). Without this move it was impossible even to conceptualise the notion of a shortfall in aggregate demand as a cause of unemployment. In the jargon, he moved from partial to general equilibrium.

One point. The history of macroeconomics as a science (and hence not just of neoclassical macroeconomics) is of course also the history of measuring the macro-economy. When one compares Keynes’s ‘The Economic Consequences of the Peace’ (which is loaded with data but lacks a coherent framework like the then not yet developed Flow of Funds) with his ‘How to Pay for the War’, written twenty years later and based upon coherent national accounting concepts, it shows how much progress Keynes’s thinking had made. After the war, this monetary framework of statistics was extended in two ways. One extension, consistent with national accounting, was the development of the Flow of Funds. The other, consistent with the business cycle research of the National Bureau of Economic Research, was the more precise, high-frequency measurement of money by Friedman and Schwartz and many others. All major central banks gather and use these Flow of Funds for their macroeconomic analysis (and to measure the amount of money). Both statistical frameworks are incorporated in the monthly monetary press release of the ECB. As stated, neoclassical economists have not even been able to introduce a valid definition of money into their ‘DSGE’ models, let alone valid measurements. Money is still not really incorporated in the models. The Flow of Funds, by contrast, are general in the sense that they actually show complete financial flows between a complete set of sectors (including credit, payables and receivables, international flows, and whatever else) and seem to belong to another universe than the models. Non-neoclassical macro models do use these data. Neoclassical macro models don’t. What a mess.
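What ‘consistent, coherent and complete’ buys you can be shown in a few lines: in a complete Flow of Funds, every financial flow is a source for one sector and a use for another, so net lending summed over all sectors (including the rest of the world) is zero by construction. A toy check, with invented numbers:

```python
# Toy Flow of Funds consistency check (hypothetical net lending, % of GDP).
# One sector's financial surplus is, by double-entry logic, another's deficit.
net_lending = {
    "households":            4.0,   # net lenders
    "non_financial_firms":  -1.5,   # net borrowers
    "government":           -3.0,   # budget deficit
    "financial_sector":      0.5,
    "rest_of_world":         0.0,   # mirror of the current account
}
assert sum(net_lending.values()) == 0.0, "a gap would signal a measurement error"
```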

The crucial methodological move in the 1970s was to reject models that produced policy options that relied on “fooling” participants systematically about the economic consequences of given policy interventions. The new tenet, to satisfy the so-called Lucas critique, was that models must involve private decision makers doing the best they could according to their own interest in light of forward-looking expectations that should not be systematically wrong — and no less correct than the beliefs of policymakers themselves. In the jargon, this is the requirement that models have proper “microfoundations”.

One point. Stating that a model has ‘microfoundations’ does not mean that it is based upon micro behavior or micro measurement. The national accounts and the Flow of Funds are based upon such micro measurements – the models aren’t. ‘Microfounded’ in the models means that the entire economy is modelled as one person (think: Robinson Crusoe) who optimizes his utility present and utility yet to come, i.e. who decides whether to consume one ‘complex product’ (think: a coconut) or to plant it. Unlike in real life, products do not change; the model is ahistorical and hence not truly dynamic. The model does not involve decision makers either – it basically is about one person (hey, what female economist would ever have dreamed up a model consisting of one single person alone on an island…). Yes, at the moment institutional detail is added to the models, like the existence of other islands and maybe even a ‘Friday’ who accompanies Robinson. But it is simply not true that these models are based upon the interaction of a multitude of decision makers. The central tenet of these models is that an all-knowing individual behaves as one giant homo economicus (who indeed has all the knowledge there is to know about all coconuts present and all coconuts yet to come). Having only one person in the models of course means that there is no inequality, that money is basically not needed and that there is in fact no unemployment: the representative consumer simply works a bit more or a bit less, depending on the amounts of coconuts present and coconuts yet to come he wants to have.
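In stripped-down form, this ‘microfounded’ core is nothing more than a textbook intertemporal problem. A minimal two-period sketch (not any specific published model) looks like this:

$$\max_{c_1,\,c_2}\; u(c_1) + \beta\, u(c_2) \qquad \text{s.t.} \qquad c_2 = f(k - c_1)$$

where $k$ is the initial stock of coconuts, $c_1$ is what Crusoe eats today, $k - c_1$ is what he plants, $f(\cdot)$ is the harvest from planting, and $\beta < 1$ discounts future utility. The first-order condition $u'(c_1) = \beta\, f'(k - c_1)\, u'(c_2)$ – the Euler equation – pins down the entire ‘economy’. Note what the setup excludes by construction: money, other people, debt, and unemployment.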

This, too, had implications for what it was possible to conceptualise within the new standard models. For example, it entailed that there is a (single) natural rate of unemployment, and as Paul Krugman says in his contribution to the project, “basically everyone accepted the natural rate idea, abandoning the notion of a long-run trade-off between inflation and unemployment”. The implications for the proper purposes of fiscal and monetary policy are huge — and to a large extent negative, in that they limit the plausible good these can have for the economy. How tight these limits are depends on modelling choices within the overall “microfounded” framework.

One point: on a close reading of the models it turns out that there is, in an etymological sense, a ‘1984’ aspect to them: unemployment is leisure. Lucas has often made this connection explicitly. Unemployment is a private choice. The statisticians, however, wisely define unemployment (which in many European countries recently reached, based upon this very definition, levels of between 20 and 25%) as a situation which people actively try to escape. They want to change this situation. It is not leisure. It makes them, according to many studies, miserable. It stinks. And people do not get used to it. Which means that the implications of the model do not hold. This is not trivial. Not including public goods and services in the concept of consumption means that cutting expenditure on these goods and services does not lead to lower welfare – to the contrary, it frees labor and capital for private consumption. Stating that unemployment is leisure means that involuntary unemployment does not exist – based upon such ideas, economists as well as the European Commission estimated natural rates of unemployment in countries like Spain of about 25%… what a mess. But Sandbu is right: this shows that the implications of using such models to analyze policy are huge – and destructive.

There are two important truths to recognise here. One is general: methodology matters. It matters because it shapes, directs and to some degree constrains the answers economic research can produce to important questions of policy. It can even shape the sort of questions that can be profitably pursued in the first place. By “profitably” I have in mind what questions the methodological framework can fruitfully address, but also what questions researchers are rewarded for pursuing. In a scathing critique of the “microfoundations hegemony” Simon Wren-Lewis complains that “non-microfounded models [were] eventually excluded from the top journals”, with implications for the prestige, career advancement, and funding for researchers using other kinds of analysis.

I agree with Sandbu. Methodology matters. Models which are conceptually at glaring odds with measurements should simply not be used. Such models malshape, maldirect and malconstrain real-world policies. And indeed, as Buiter and Goodhart have argued, models which do not contain a financial sector make it, ahem, difficult to analyze this sector and its (destabilizing?) influence on the economy. Again: the national accounts and the Flow of Funds are, when it comes to monetary sectors, consistent, coherent and complete, and do enable such analysis. The models are not able to do this. Sectors are missing, certain kinds of money are missing, etcetera, etcetera. What a mess. It’s not macro at all.

The second important truth concerns specifics: the precise constraints the dominant methodology imposes, how serious a problem they are, and how they could or should be lifted. Here the rebuilding macro theory project is perhaps the most illuminating as it showcases wide disagreement among the contributors.

One point. As stated, the models are conceptually vague, fuzzy, incomplete, inconsistent and incoherent. They cannot be used for precise answers.

These differences go into the detailed modelling choices, and uninitiated but interested readers may benefit from a grasp of the overall structure of the standard model — because (macro) economics is too important to be left to (macro) economists only. At the core of macroeconomic research is the DSGE model, so-called because it is Dynamic (it describes an economy evolving over time and models decision makers as acting on expectations of the future), Stochastic (it includes random fluctuations or “shocks” to the economy) and General Equilibrium (everything depends on everything else).
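For the interested reader: in its textbook form – a standard simplification, not the ECB’s actual model – this skeleton consists of three equations, which make the three letters concrete:

$$
\begin{aligned}
x_t &= \mathbb{E}_t x_{t+1} - \sigma\,(i_t - \mathbb{E}_t \pi_{t+1}) + u_t &&\text{(IS curve: demand)}\\
\pi_t &= \beta\,\mathbb{E}_t \pi_{t+1} + \kappa\, x_t &&\text{(Phillips curve: price setting)}\\
i_t &= \phi_\pi\, \pi_t + \phi_x\, x_t &&\text{(policy rule: the central bank)}
\end{aligned}
$$

where $x_t$ is the output gap, $\pi_t$ inflation, $i_t$ the nominal interest rate and $u_t$ a random demand shock. ‘Dynamic’ lives in the expectation terms $\mathbb{E}_t$, ‘Stochastic’ in $u_t$, and ‘General Equilibrium’ in the requirement that all three equations hold jointly.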

One point. The models do not incorporate real data on the real world. The EAGLE-FLI model of the ECB talks about deposits but does not really incorporate data on deposits. It talks about houses but does not incorporate data on the number and value of houses or about changes in ownership rates. It talks about consumption but excludes public goods and services. The shocks to the model are not real-life shocks, and the economy does not really evolve over time, as it is assumed that it quickly returns to equilibrium (hence the idea that 25% unemployment in Spain is natural: one year is enough to solve such a problem in a general equilibrium setting…). And indeed, when expectations of the future indicate that the sea level will rise, the representative consumer will plant coconut trees at another, higher place. Do we?

For those who don’t do economic modelling but want to understand what it actually involves, the New York Fed has produced a guide to what a DSGE model is and how policymakers use it. The introductory article by Vines and Wills in the Rethinking Macroeconomics volume also gives a verbal overview of the state of the art model used by the European Central Bank, as well as the basic mathematical content. But in very crude terms, a DSGE model contains equations representing individuals’ choices to consume and save; companies’ choices to produce and invest; and the central bank’s setting of nominal interest rates. Upon this skeleton, modelling choices can be added about all kinds of things (how companies set prices; how work and hiring decisions are made; other policy choices and so on). The model is solved or estimated as a complicated system of many equations, often requiring simulations, simplifications or shortcuts to approximate a solution. Much work in macroeconomics lies in refining the basic model to make it more fit for purpose.
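To give a feel for what ‘solved or estimated… often requiring simulations’ means in practice, here is a deliberately crude Python sketch of the three-equation skeleton shown above. To keep it short I replace rational expectations with last period’s values (so there is no proper rational-expectations solution step) and use invented parameter values; it shows nothing more than the mechanics of a stochastic shock arriving, propagating, and dying out:

```python
# Crude simulation of a three-equation model with backward-looking expectations.
# Parameter values are invented for illustration only.
import random

sigma, beta, kappa = 1.0, 0.99, 0.1   # demand slope, discounting, price stickiness
phi_pi, phi_x = 1.5, 0.5              # policy-rule coefficients
rho = 0.8                             # persistence of the demand shock

random.seed(0)
x, pi, u = 0.0, 0.0, 0.0              # start at the zero steady state
for t in range(40):
    u = rho * u + random.gauss(0, 0.01)   # AR(1) shock: the "Stochastic" part
    i = phi_pi * pi + phi_x * x           # central bank sets the interest rate
    x = x - sigma * (i - pi) + u          # IS curve, expectations = last period
    pi = beta * pi + kappa * x            # Phillips curve
    print(f"t={t:2d}  output gap={x:+.4f}  inflation={pi:+.4f}")
```

With these (stable) parameter choices the model economy drifts back to its steady state after every shock – which is exactly the built-in assumption criticized above.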

Whether this can be adequately done depends on what that purpose is; how far a “refinement” can stretch the structure; and what criteria modelling choices are and should be judged by. It is in these questions we are going to see the real heat of the intellectual conflict.

Again: the models are not about individuals. They are about the hive-mind of the representative consumer. There are no (none at all, nada, nothing) specifics about how individual behavior and expectations aggregate. Institutional economists played a major, not to say decisive, role in developing the concepts of macro-statistics. Despite the best efforts of people like Friedman and Schwartz, neoclassical economists did not bother to really master these concepts and statistics. Mind that Friedman and Schwartz, in their statistical work on the historical change of the amount of money, analyzed real-life business cycles as identified by the NBER macroeconomists and made a very careful distinction between minor and major shocks – a distinction which did not make it into the models. Introducing this distinction into the models would have required the modellers to incorporate a real financial sector, credit and debt, and the possibility of non-general-equilibrium outcomes. They simply chose not to think about it and to assume a representative consumer who, alone on his island, does not need to borrow or lend. What a mess. All heat and no light. What a waste.

Merijn T. Knibbe
Economic historian, statistician, outdoor guide (coastal mudflats), father, teacher, blogger. Likes De Kift and El Greco. Favorite epoch 1890-1930.
