
Macroeconomic forecasting


Macroeconomic forecasts produced with macroeconomic models tend to be little better than intelligent guesswork. That is not an opinion – it is a fact. It is a fact because for decades many reputable and long-standing model-based forecasters have looked at their past errors, and that is what they find. It is also a fact because we can use models to generate standard errors for forecasts, as well as the most likely outcome that gets all the attention. Doing so indicates errors of a similar magnitude as those observed from past forecasts. In other words, model-based forecasts are predictably bad …

I think it is safe to say that this inability to accurately forecast is unlikely to change anytime soon. Which raises an obvious question: why do people still use often elaborate models to forecast? …

It makes sense for both monetary and fiscal authorities to forecast. So why use the combination of a macroeconomic model and judgement to do so, rather than intelligent guesswork? (Intelligent guesswork here means some atheoretical time series forecasting technique.) The first point is that it is not obviously harmful to do so …

Many other organisations, not directly involved in policy making, produce macro forecasts. Why do they bother? Why not just use the policy makers’ forecast? A large part of the answer must be that the media shows great interest in these forecasts. Why is this? I’m tempted to say it’s for the same reason as many people read daily horoscopes. However I think it’s worth adding that there is a small element of a conspiracy to deceive going on here too …

The rather boring truth is that it is entirely predictable that forecasters will miss major recessions, just as it is equally predictable that each time this happens we get hundreds of articles written asking what has gone wrong with macro forecasting. The answer is always the same – nothing. Macroeconomic model-based forecasts are always bad, but probably no worse than intelligent guesses.

Simon Wren-Lewis
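
Wren-Lewis's "intelligent guesswork" just means some atheoretical time-series technique. To make that benchmark concrete, here is a minimal sketch under my own assumptions – an AR(1) fitted by ordinary least squares to made-up data; nothing in it comes from Wren-Lewis's post:

```python
import numpy as np

def ar1_forecast(y, horizon=1):
    """Atheoretical benchmark: fit y_t = c + phi * y_{t-1} by OLS
    and iterate the fitted equation forward `horizon` steps."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # regressors: constant + lagged y
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]   # OLS coefficients
    path, f = [], y[-1]
    for _ in range(horizon):
        f = c + phi * f
        path.append(f)
    return np.array(path)

# Purely synthetic "GDP growth" series, for illustration only:
rng = np.random.default_rng(0)
growth = 2.5 + 0.5 * rng.standard_normal(80)
print(ar1_forecast(growth, horizon=4))  # the "intelligent guess" for the next four periods
```

The point of such a benchmark is not that it is good, but that anything costlier – an elaborate structural model plus judgement – should at least have to beat it.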

Hmm …

Strange. On the one hand, Wren-Lewis says that “macroeconomic forecasts are always bad,” but, on the other hand, since they are “probably no worse than intelligent guesses” and anyway are “not obviously harmful,” we supposedly have no reason to complain.

But Wren-Lewis is wrong. These forecasting models, and the organizations and people around them, cost society billions of pounds, euros and dollars every year. And if they do not produce anything better than “intelligent guesswork,” I’m afraid most taxpayers would say that they are certainly not harmless at all!

Mainstream neoclassical economists often maintain – usually referring to the methodological instrumentalism of Milton Friedman – that it doesn’t matter whether the assumptions of the models they use are realistic or not. What matters is whether the predictions are right or not. But if so, the only conclusion we can draw is – throw away the garbage! Because, oh dear, oh dear, how wrong they have been!

When Simon Potter a couple of years ago analyzed the forecasts that the Federal Reserve Bank of New York made for real GDP growth and unemployment over the years 2007–2010, it turned out that the forecasts were off by 5.9 and 4.4 percentage points respectively – the latter equivalent to more than 6 million unemployed workers:

Economic forecasters never expect to predict precisely. One way of measuring the accuracy of their forecasts is against previous forecast errors. When judged by forecast error performance metrics from the macroeconomic quiescent period that many economists have labeled the Great Moderation, the New York Fed research staff forecasts, as well as most private sector forecasts for real activity before the Great Recession, look unusually far off the mark.

One source for such metrics is a paper by Reifschneider and Tulip (2007). They analyzed the forecast error performance of a range of public and private forecasters over 1986 to 2006 (that is, roughly the period that most economists associate with the Great Moderation in the United States).

On the basis of their analysis, one could have expected that an October 2007 forecast of real GDP growth for 2008 would be within 1.3 percentage points of the actual outcome 70 percent of the time. The New York Fed staff forecast at that time was for growth of 2.6 percent in 2008. Based on the forecast of 2.6 percent and the size of forecast errors over the Great Moderation period, one would have expected that 70 percent of the time, actual growth would be within the 1.3 to 3.9 percent range. The current estimate of actual growth in 2008 is -3.3 percent, indicating that our forecast was off by 5.9 percentage points.

Using a similar approach to Reifschneider and Tulip but including forecast errors for 2007, one would have expected that 70 percent of the time the unemployment rate in the fourth quarter of 2009 should have been within 0.7 percentage point of a forecast made in April 2008. The actual forecast error was 4.4 percentage points, equivalent to an unexpected increase of over 6 million in the number of unemployed workers. Under the erroneous assumption that the 70 percent projection error band was based on a normal distribution, this would have been a 6 standard deviation error, a very unlikely occurrence indeed.
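
Potter’s arithmetic is easy to check. Here is a minimal verification using only the figures quoted above (the standard-deviation conversion assumes normality, which, as the quote itself stresses, is an erroneous assumption here):

```python
from scipy.stats import norm

# GDP: October 2007 forecast of 2.6%, 70% band of +/-1.3 pp, outcome -3.3%
forecast, band, actual = 2.6, 1.3, -3.3
print(forecast - band, forecast + band)  # 1.3 3.9 -> the quoted 70% range
print(forecast - actual)                 # 5.9 -> forecast error in percentage points

# Unemployment: 70% band of +/-0.7 pp, realized error of 4.4 pp.
# A central 70% interval of a normal distribution spans about +/-1.04 sigma:
z70 = norm.ppf(0.85)   # ~1.036
sigma = 0.7 / z70      # implied standard deviation of ~0.68 pp
print(4.4 / sigma)     # ~6.5 -> roughly the quoted "6 standard deviation" error
```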

In other words — the “rigorous” and “precise” macroeconomic mathematical-statistical forecasting models were wrong. And the rest of us have to pay.

Potter is not the only one who has lately criticized the forecasting business. John Mingers comes to essentially the same conclusion when scrutinizing it from a somewhat more theoretical angle:

It is clearly the case that experienced modellers could easily come up with significantly different models based on the same set of data thus undermining claims to researcher-independent objectivity. This has been demonstrated empirically by Magnus and Morgan (1999) who conducted an experiment in which an apprentice had to try to replicate the analysis of a dataset that might have been carried out by three different experts (Leamer, Sims, and Hendry) following their published guidance. In all cases the results were different from each other, and different from that which would have been produced by the expert, thus demonstrating the importance of tacit knowledge in statistical analysis.

Magnus and Morgan conducted a further experiment which involved eight expert teams, from different universities, analysing the same sets of data each using their own particular methodology. The data concerned the demand for food in the US and in the Netherlands and was based on a classic study by Tobin (1950) augmented with more recent data. The teams were asked to estimate the income elasticity of food demand and to forecast per capita food consumption. In terms of elasticities, the lowest estimates were around 0.38 whilst the highest were around 0.74 – clearly vastly different especially when remembering that these were based on the same sets of data. The forecasts were perhaps even more extreme – from a base of around 4000 in 1989 the lowest forecast for the year 2000 was 4130 while the highest was nearly 18000!
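
The mechanism Mingers describes can be reproduced in miniature. The sketch below does not use the Magnus and Morgan data – the series is synthetic and the two specifications (a static log-log regression versus a dynamic one with a lagged dependent variable) are my own illustrative choices – but it shows how two defensible models fitted to the very same data report quite different income elasticities:

```python
import numpy as np

rng = np.random.default_rng(42)
T = 200
log_income = np.cumsum(0.02 + 0.05 * rng.standard_normal(T))  # trending income (synthetic)

# Made-up data-generating process with habit persistence in food demand:
log_food = np.zeros(T)
for t in range(1, T):
    log_food[t] = (0.1 + 0.3 * log_income[t] + 0.5 * log_food[t - 1]
                   + 0.02 * rng.standard_normal())

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# "Team A": static log-log regression -> one income elasticity
static_elasticity = ols(np.column_stack([np.ones(T), log_income]), log_food)[1]

# "Team B": dynamic regression with a lagged dependent variable
Xb = np.column_stack([np.ones(T - 1), log_income[1:], log_food[:-1]])
_, b, c = ols(Xb, log_food[1:])

print(f"Team A, static elasticity:    {static_elasticity:.2f}")
print(f"Team B, short-run elasticity: {b:.2f}")
print(f"Team B, long-run elasticity:  {b / (1 - c):.2f}")
```

Each of those numbers is a perfectly defensible answer given its specification – which is exactly the tacit-knowledge problem Mingers points to.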

The empirical and theoretical evidence is clear. Predictions and forecasts are inherently difficult to make in a socio-economic domain where genuine uncertainty and unknown unknowns often rule the roost. The real processes that underlie the time series that economists use to make their predictions and forecasts do not conform to the assumptions made in the applied statistical and econometric models. A fortiori, much less is predictable than is standardly – and uncritically – assumed. The forecasting models fail to a large extent because the kind of uncertainty that faces humans and societies makes the models, strictly speaking, inapplicable. The future is inherently unknowable – and using statistics, econometrics, decision theory or game theory does not in the least overcome this ontological fact. The economic future is not something that we normally can predict in advance. Better, then, to accept that as a rule “we simply do not know.”

So the claim that this counterproductive forecasting activity is harmless simply isn’t true. Spending billions upon billions of hard-earned money on an activity that is no better than “intelligent guesswork” does real harm to our economies.

A couple of years ago Lars E. O. Svensson – former deputy governor of the Swedish Riksbank – showed that the bank, relying to a large extent on forecasts produced by its macroeconomic models, had conducted a monetary policy that, according to his calculations, led to far too high unemployment. Harmless? Hardly!

In New York State, Section 899 of the Code of Criminal Procedure provides that persons “Pretending to Forecast the Future” shall be considered disorderly under subdivision 3, Section 901 of the Code and liable to a fine of $250 and/or six months in prison. Although the law does not apply to “ecclesiastical bodies acting in good faith and without fees,” I’m not sure where that leaves macroeconomic model-builders and other forecasters …

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history and methodology of economics.
