
The need for pluralism in economics




For decades, mainstream economists have reacted to criticism of their methodology mainly by dismissing it, rather than engaging with it. And the customary form that dismissal has taken is to argue that critics and purveyors of alternative approaches to economics simply aren’t capable of understanding the mathematics the mainstream uses. The latest instalment of this slant on non-mainstream economic theory appeared in Noah Smith’s column in Bloomberg View: “Economics Without Math Is Trendy, But It Doesn’t Add Up“.

Figure 1: Noah’s tweet announcing his blog post


While Noah’s column made some valid points (and there’s been some good off-line discussion between us too), its core message spouted five conflicting fallacies as fact:

  • The first (proclaimed in the title supplied by the Bloomberg sub-editor rather than by Noah) is that non-mainstream (or “heterodox”) economics is not mathematical;
  • The second is that the heterodox mathematical models that do exist can’t be used to make forecasts;
  • The third, that there are some that can make forecasts, but these have so many parameters that they are easily “over-fitted” to existing data and therefore useless at prediction (and they lack sufficient attention to human behaviour);
  • The fourth, that though heterodox economists make a song and dance about developing “stock-flow consistent” models, mainstream models are stock-flow consistent too; and
  • The fifth, that agent-based non-mainstream approaches haven’t produced decent results as yet, and may never do so.

I’ll consider each of these assertions one by one, because they certainly can’t be addressed together.

Non-mathematical?

There is indeed a wing of heterodox economics that is anti-mathematical. Known as “Critical Realism” and centred on the work of Tony Lawson at Cambridge UK, it attributes the failings of economics to the use of mathematics itself. Noah has been less than complimentary about this particular subset of heterodox economics in the past—see Figure 2.

Figure 2: Noah’s reaction to critical realism


What Noah might not know is that many heterodox economists are critical of this approach as well. In response to a paper by Lawson that effectively defined “Neoclassical” economics as any economics that made use of mathematics (which would define me as a Neoclassical!), Jamie Morgan edited a book of replies to Lawson entitled What is Neoclassical Economics? (including a chapter by me). While the authors agreed with Lawson’s primary point that economics has suffered from favouring apparent mathematical elegance above realism, several of us asserted that mathematical analysis is needed in economics, if only for the reason that Noah gave in his article:

At the end of the day, policymakers and investors need to make quantitative decisions — how much to raise or lower interest rates, how big of a deficit to run, or how much wealth to allocate to Treasury bonds. (Noah Smith, August 8 2016)

The difference between mainstream and heterodox economists therefore isn’t primarily that the former is mathematical while the latter is verbal. It’s that heterodox mathematical economists accept Tony Lawson’s key point that mathematical models must be grounded in realism; we just reject, to varying degrees, Tony’s argument that mathematics inherently makes models unrealistic.

In contrast, the development of mainstream modelling has largely followed Milton Friedman’s edict that the realism of a model isn’t important—all that matters is that it generates realistic predictions:

Truly important and significant hypotheses will be found to have “assumptions” that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions (in this sense)… the relevant question to ask about the “assumptions” of a theory is not whether they are descriptively “realistic,” for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions. (Friedman 1966, The Methodology of Positive Economics; emphasis added)

Even on this criterion, mainstream macroeconomics is a failure, given the occurrence of a crisis that it believed could not happen. But this criterion alone isn’t sufficient: realism does matter.

If Friedman’s “only predictive accuracy matters, not realism” criterion had been applied in astronomy, we would still be using Ptolemy’s model that put the Earth at the centre of the Universe with the Sun, Moon, planets and stars orbiting it, because the model yielded quite accurate predictions of where celestial objects would appear to be in the sky centuries into the future. Its predictions were in fact more accurate than the initial predictions from Galileo’s heliocentric model, even though Galileo’s core concept—that the Sun was the centre of the solar system, not the Earth—was true, while Ptolemy’s Earth-centric paradigm was false.

Friedman’s argument was simply bad methodology, and it’s led to bad mainstream mathematical models that make screamingly unrealistic assumptions in order to reach desired results.

The pivotal unrealistic assumption of mainstream economics prior to the crisis was that “economic agents” have “rational expectations”. It sounds reasonable as a sound bite—who wants to be accused of having “irrational expectations”?—but it actually means assuming (a) that people have an accurate model of the economy in their heads that guides their behaviour today and (b) that this model happens to be the same as the one the Neoclassical author has dreamed up in his (it’s rarely her, on either side of economics) paper. And there are many, many other unrealistic assumptions.

Noah’s argument that heterodox economics is less mathematical than the mainstream was also truer some decades ago, but today, with so many physicists and mathematicians in the “heterodox” camp, it’s a very dated defence of the mainstream.

The standard riposte to critics of mainstream economics used to be that they are critical simply because they lack the mathematical skills to understand Neoclassical models, and—the argument Noah repeats here—their papers were just verbal hand-waving that couldn’t be given precise mathematical form, and therefore couldn’t be tested:

Also, vague ideas can’t easily be tested against data and rejected. The heart of science is throwing away models that don’t work. One of mainstream macro’s biggest failings is that theories that don’t fit the data continue to be regarded as good and useful models. But ideas like Minsky’s, with no equations or quantitative predictions, are almost impossible to reject — if they seem not to fit with events, they can simply be reinterpreted. People will forever argue about what Minsky meant, or John Maynard Keynes, or Friedrich Hayek. (Noah Smith, 8th August 2016)

“Ideas like Minsky’s, with no equations”? If it’s equations and Minsky you want, try this macroeconomics paper “Destabilizing a stable crisis: Employment persistence and government intervention in macroeconomics” (Costa Lima, Grasselli, Wang & Wu 2014). And I defy any Neoclassical to tell the authors (including mathematician Matheus Grasselli, whose PhD was entitled “Classical and Quantum Information Geometry“) that they lack the mathematical ability to understand Neoclassical models.

The mathematics used in heterodox papers like this one is in fact harder than that used by the mainstream, because it rejects a crucial “simplifying assumption” that mainstreamers routinely use to make their models easier to handle: imposing linearity on unstable nonlinear systems.

Imposing linearity on a nonlinear system is a valid procedure if, and only if, the equilibrium around which the model is linearized is stable. But the canonical model from which DSGE models were derived—Ramsey’s 1928 optimal savings model—has an unstable equilibrium shaped like a horse’s saddle. Imagine trying to drop a ball onto a saddle so that it doesn’t slide off—impossible, no?

Not if you’re a “representative agent” with “rational expectations”! Neoclassical modelers assume that the “representative agents” in their models are in effect clever enough to be able to drop a ball onto the economic saddle and have it remain on it, rather than sliding off (they call it imposing a “transversality condition”).

The mathematically more valid approach is to accept that, if your model’s equilibria are unstable, then your model will display far-from-equilibrium dynamics, rather than oscillating about and converging on an equilibrium. This requires you to understand and apply techniques from complex systems analysis, which are much more sophisticated than the mathematics Neoclassical modelers use (see the wonderful free ChaosBook http://www.chaosbook.org/ for details). The upside of this effort, though, is that since the real world is nonlinear, you come much closer to capturing it with fundamentally nonlinear techniques than by pretending it is linear.
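To make the saddle-point issue concrete, here is a minimal sketch (an arbitrary two-variable nonlinear system, not the Ramsey model or any published DSGE model): the Jacobian at the equilibrium has one positive and one negative eigenvalue, so almost every starting point near the equilibrium moves away from it, and only a simulation of the full nonlinear system tells you where trajectories actually end up.

```python
# A minimal sketch, not any published model: a two-variable nonlinear system
# with a saddle-point equilibrium at the origin. We (1) check the eigenvalues
# of the Jacobian there, and (2) simulate the full nonlinear system to show
# that generic starting points leave the neighbourhood of the saddle.
import numpy as np
from scipy.integrate import solve_ivp

def system(t, state):
    x, y = state
    # Illustrative nonlinear dynamics with equilibria at (0, 0) and (+/-1, 0)
    return [x - x**3, -y]

# Jacobian of the system evaluated at the equilibrium (0, 0)
J = np.array([[1.0, 0.0],
              [0.0, -1.0]])
print("Eigenvalues at (0, 0):", np.linalg.eigvals(J))
# One positive and one negative eigenvalue => a saddle. Linearising here and
# assuming convergence (the "transversality" trick) keeps only the knife-edge
# trajectories that lie exactly on the stable manifold (here, the y-axis).

# Simulating the nonlinear system from a point just off that manifold shows
# what generically happens: the trajectory leaves the saddle and ends up at
# a different equilibrium altogether (x = 1).
sol = solve_ivp(system, (0.0, 10.0), [0.01, 0.5])
print("State at t = 10:", sol.y[:, -1])
```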

Mainstream economists are belatedly becoming aware of this mistake, as Olivier Blanchard stated recently:

These techniques however made sense only under a vision in which economic fluctuations were regular enough so that, by looking at the past, people and firms (and the econometricians who apply statistics to economics) could understand their nature and form expectations of the future, and simple enough so that small shocks had small effects and a shock twice as big as another had twice the effect on economic activity…

Thinking about macroeconomics was largely shaped by those assumptions. We in the field did think of the economy as roughly linear, constantly subject to different shocks, constantly fluctuating, but naturally returning to its steady state over time. Instead of talking about fluctuations, we increasingly used the term “business cycle.” Even when we later developed techniques to deal with nonlinearities, this generally benign view of fluctuations remained dominant. (Blanchard September 2014, “Where Danger Lurks“).

This is not a problem for heterodox economists, since they don’t take as an article of faith that the economy is stable. But it does make it much more difficult to evaluate the properties of a model, fit it to data, and so on. For this, we need funding and time—not a casual dismissal of the current state of heterodox mathematical economics.

Can’t make forecasts?

Noah is largely correct that heterodox models aren’t set up to make numerical forecasts (though there are some that are). Instead the majority of heterodox models are set up to consider existing trends, and to assess how feasible it is that they can be maintained.

Far from being a weakness, this has been a strength of the heterodox approach: it enabled Wynne Godley to warn from as long ago as 1998 that the trends in the USA’s financial flows were unsustainable, and that therefore a crisis was inevitable unless these trends were reversed. At the same time that the mainstream was crowing about “The Great Moderation”, Wynne was warning that “Goldilocks is doomed“.

Wynne’s method was essentially simple, yet impossible for the mainstream to replicate, because it considered monetary flows between economic sectors—and the stocks of debt that these flows implied. The mainstream can’t do this, not because it’s impossible—it’s basically accounting—but because they have falsely persuaded themselves that money is neutral, and therefore don’t consider the viability of monetary flows in their models.

Dividing the economy into the government, private and international sectors, Godley pointed out that the flows between them must sum to zero: an outflow from any one sector is an inflow to one of the others. Since the public sector under Clinton was running a surplus, and the trade balance was negative, the only way this could be sustained was for the private sector to “run a deficit”—to increase its debt to the banking sector. This implied unsustainable levels of private debt in the future, so that the trends that gave rise to “The Great Moderation” could not continue. As Wynne and Randall Wray put it:

It has been widely recognized that there are two black spots that blemish the appearance of our Goldilocks economy: low household saving (which has actually fallen below zero) and the burgeoning trade deficit. However, commentators have not noted so clearly that public sector surpluses and international current account deficits require domestic private sector deficits. Once this is understood, it will become clear that Goldilocks is doomed. (Godley & Wray, “Is Goldilocks Doomed?“, March 2000; emphasis added).
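The accounting behind this can be written in one line. In standard national-accounts notation (my labels rather than Godley’s), with S for private saving, I for private investment, T for taxes, G for government spending, X for exports and M for imports, the three sectoral balances must sum to zero:

```latex
% From Y = C + I + G + (X - M) (expenditure) and Y = C + S + T (income uses):
\underbrace{(S - I)}_{\text{private balance}}
\;+\; \underbrace{(T - G)}_{\text{government balance}}
\;+\; \underbrace{(M - X)}_{\text{foreign balance}} \;=\; 0
```

A government surplus (T > G) combined with a trade deficit (M > X) therefore forces the private balance to be negative: the private sector must spend more than it earns, which in a monetary economy means increasing its debt to the banking sector.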

Had Wynne’s warnings been heeded, the 2008 crisis might have been averted. But of course they weren’t: instead mainstream economists generated numerical forecasts from their DSGE models that extrapolated “The Great Moderation” into the indefinite future. And in 2008, the US (and most of the global economy) crashed into the turning point that Wynne warned was inevitably coming.

Wynne wasn’t the only heterodox economist predicting a future crisis for the US and global economy, at a time when mainstream Neoclassical modellers were wrongly predicting continued economic bliss. Others, following Hyman Minsky’s “Financial Instability Hypothesis”, made similar warnings.

Noah was roughly right, but precisely wrong, when he claimed that “Minsky, though trained in math, chose not to use equations to model the economy — instead, he sketched broad ideas in plain English.”

In fact, Minsky began with a mathematical model of financial instability based on Samuelson’s multiplier-accelerator model (Minsky, 1957 “Monetary Systems and Accelerator Models” The American Economic Review, 47, pp. 860-883), but abandoned that for a verbal argument later (wonkish hint to Noah: abandoning this was a good idea, because Samuelson’s model is economically invalid. Transform it into a vector difference equation and you’ll see that its matrix is invertible).

Minsky subsequently attempted to express his model using Kalecki’s mathematical approach, but never quite got there. However, that didn’t stop others—including me—trying to find a way to express his ideas mathematically. I succeeded in August 1992 (with the paper being published in 1995), and the most remarkable thing about it—apart from the fact that it did generate a “Minsky Crisis”—was that the crisis was preceded by a period of apparent stability. That is in fact what transpired in the real world: the “Great Recession” was preceded by the “Great Moderation”. This was not a prediction of Minsky’s verbal model itself: it was the product of putting that verbal model in mathematical form, and then seeing how it behaved. It in effect predicted that, if a Minsky crisis were to occur, then it would be preceded by a period of diminishing cycles in inflation (with the wages share of output as a proxy for inflation) and unemployment.
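For readers who want to see the skeleton of that model, here is a minimal sketch of Goodwin’s wage-share and employment growth cycle, the base on which my 1995 Minsky model was built. This is not the published model (which adds a nonlinear investment function and a stock of private debt), and the parameter values below are purely illustrative.

```python
# A minimal sketch of Goodwin's growth cycle, the skeleton underneath
# Keen (1995). The published model adds a nonlinear investment function
# and a private-debt stock; the parameters here are purely illustrative.
import numpy as np
from scipy.integrate import solve_ivp

alpha = 0.025   # labour productivity growth rate
beta  = 0.015   # labour force growth rate
nu    = 3.0     # capital-to-output ratio
gamma = 0.50    # Phillips curve intercept
rho   = 0.60    # Phillips curve slope

def phillips(lam):
    # Linear Phillips curve: real wages grow faster when employment is high
    return -gamma + rho * lam

def goodwin(t, state):
    omega, lam = state                                   # wage share, employment rate
    d_omega = omega * (phillips(lam) - alpha)            # wage share dynamics
    d_lam   = lam * ((1.0 - omega) / nu - alpha - beta)  # employment dynamics
    return [d_omega, d_lam]

sol = solve_ivp(goodwin, (0.0, 100.0), [0.85, 0.90], dense_output=True, max_step=0.1)
omega, lam = sol.sol(np.linspace(0.0, 100.0, 2000))
print(f"wage share cycles between      {omega.min():.3f} and {omega.max():.3f}")
print(f"employment rate cycles between {lam.min():.3f} and {lam.max():.3f}")
# The two variables cycle endlessly around an equilibrium they never reach.
# Adding debt-financed investment is what turns these cycles into the
# "diminishing cycles followed by breakdown" pattern described above.
```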

The behaviour was so striking that I finished my paper noting it:

From the perspective of economic theory and policy, this vision of a capitalist economy with finance requires us to go beyond that habit of mind which Keynes described so well, the excessive reliance on the (stable) recent past as a guide to the future. The chaotic dynamics explored in this paper should warn us against accepting a period of relative tranquility in a capitalist economy as anything other than a lull before the storm. (Keen, S. 1995, “Finance and Economic Breakdown: Modeling Minsky’s ‘Financial Instability Hypothesis’”, Journal of Post Keynesian Economics 17, p. 634)

This was before the so-called “Great Moderation” was apparent in the data, let alone before Neoclassical economists like Ben Bernanke popularised the term. This was therefore an “out of sample” prediction of my model—and of Minsky’s hypothesis. Had the 2008 crisis not been preceded by such a period, my model—and, to the extent that it captured its essence, Minsky’s hypothesis as well—would have been disproved. But the phenomenon that my Minsky model predicted as a precursor to crisis actually occurred—along with the crisis itself.

Even without quantitative predictions, these heterodox models—Godley’s stock-flow consistent projections and my nonlinear simulations—fared far better than did Neoclassical models with all their econometric bells and whistles.

Over-fitted to the data?

It is indeed true that many heterodox models have numerous parameters, and that a judicious choice of parameter values can enable a model to fit the existing data closely yet be useless for forecasting, because it tracks the noise in the data rather than the causal trends. Of course, this is equally true of mainstream models—compare, for example, the canonical Neoclassical DSGE paper “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach” (Smets and Wouters 2007) with the equally canonical Post Keynesian “Stock-Flow Consistent Modelling” (SFCM) paper “Fiscal Policy in a Stock-Flow Consistent (SFC) Model” from the same year (Godley and Lavoie 2007). Both are linear models, and the former has substantially more parameters than the latter.
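To make “over-fitting” concrete, here is a deliberately toy illustration, which has nothing to do with either paper’s actual estimation: a model with many free parameters fits the historical sample better than a parsimonious one, and then does far worse on data it has not seen.

```python
# A toy illustration of over-fitting, unrelated to either paper's estimation:
# a 10-parameter polynomial tracks the in-sample noise more closely than a
# 2-parameter (linear) fit, then fails badly out of sample.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 40)
y = 0.5 * x + rng.normal(scale=1.0, size=x.size)  # true process: linear trend + noise

x_in,  y_in  = x[:30], y[:30]   # "historical" data used for fitting
x_out, y_out = x[30:], y[30:]   # "future" data held back

for degree in (1, 9):           # 2 parameters versus 10 parameters
    coeffs  = np.polyfit(x_in, y_in, degree)
    in_mse  = np.mean((np.polyval(coeffs, x_in)  - y_in)  ** 2)
    out_mse = np.mean((np.polyval(coeffs, x_out) - y_out) ** 2)
    print(f"degree {degree}: in-sample MSE {in_mse:.3f}, out-of-sample MSE {out_mse:.3f}")
# The higher-degree fit always wins in sample (more parameters can only reduce
# the fitting error) and loses badly out of sample: it has fitted the noise.
```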

The fact that this is a serious problem for DSGE models—and not a reason why they are superior to heterodox models—is clearly stated in a new note by Olivier Blanchard:

The models … come, however, with a very large number of parameters to estimate…, a number of parameters are set a priori, through “calibration.” This approach would be reasonable if these parameters were well established empirically or theoretically… But the list of parameters chosen through calibration is typically much larger, and the evidence often much fuzzier… In many cases, the choice to rely on a “standard set of parameters” is simply a way of shifting blame for the choice of parameters to previous researchers.” (Olivier Blanchard, “Do DSGE Models Have a Future?“, August 2016; emphasis added)

What really matters, however, as a point of distinction between two approaches that share this same flaw, is not the flaw itself, but the different variables that the models regard as essential determinants of the economy’s behaviour. There the two are chalk and cheese—and the heterodox choices are far more palatable, because they include financial sector variables, whereas the pre-crisis DSGE models did not.

The real problems with fitting models to data arise not from over-fitting to what turns out to be noise rather than signal, but from fitting a model that omits crucial determinants of what actually happens in the real world, and from developing linear models of a fundamentally nonlinear real world. The former error guarantees that your carefully fitted model will match historical data superbly, but will be wildly wrong about the future, because it omits key factors that determine it. The latter error guarantees that your model can extrapolate existing trends—if it includes the main determinants of the economy—but cannot capture turning points. A linear model is, by definition, linear, and straight lines don’t bend.

On the “get the variables right” issue, most modern heterodox models are superior to mainstream DSGE ones, simply because most of them include the financial system and monetary stocks and flows in an intrinsic way.

On the linearity issue, most heterodox SFCM models are linear, and are therefore as flawed as their Neoclassical DSGE counterparts. But it then comes down to how these linear models are used. In the Neoclassical case, they are used to make numerical forecasts, and therefore they extrapolate existing trends into the future. In the heterodox case, they are used to ask whether existing trends can be sustained.

The former proclivity led DSGE modellers—such as the team behind the OECD’s Economic Outlook Report—to extrapolate the relative tranquillity of 1993-2007 into the indefinite future in June of 2007:

Recent developments have broadly confirmed this prognosis. Indeed, the current economic situation is in many ways better than what we have experienced in years. Against that background, we have stuck to the rebalancing scenario. Our central forecast remains indeed quite benign: a soft landing in the United States, a strong and sustained recovery in Europe, a solid trajectory in Japan and buoyant activity in China and India. In line with recent trends, sustained growth in OECD economies would be underpinned by strong job creation and falling unemployment. (Cotis 2007, p. 7; emphases added)

In contrast, Godley and Wray used the SFCM approach (without an actual model) to conclude that the 1993-2007 trends were unsustainable, and that without a change in policy, a crisis was inevitable:

We hasten to add that we do not believe this projection. The economy will not continue to grow; the projected budget surpluses will not be achieved; private sector spending will not continue to outstrip income; and growth of private sector indebtedness will not accelerate… As soon as private sector spending stops growing faster than private sector income, GDP will stop growing. (Godley & Wray, “Is Goldilocks Doomed?“, March 2000, p. 204)

This leads to Noah’s next false point, that Neoclassical models do what heterodox ones do anyway.

We’re doing it anyway?

There are three main strands in heterodox macro modelling: what is known as “Stock-Flow Consistent Modelling” (SFCM), which was pioneered by Wynne Godley; nonlinear system dynamics modelling; and heterogeneous multi-agent modelling (there are other approaches too, including structurally estimated models and big data systems, but these are the main ones). Noah made a strong claim about the stock-flow consistent strand and Neoclassical modelling:

Some heterodox macroeconomists, it’s true, do have quantitative theories. One is “stock-flow consistent” models (a confusing name, since mainstream models also maintain consistency between stocks and flows). These models, developed mainly by researchers at the Levy Economics Institute of Bard College, are large systems of many equations, usually linear equations — for an example, see this paper by Levy economists Dimitri B. Papadimitriou, Gennaro Zezza and Michalis Nikiforos. (Noah Smith, 8th August 2016)

I agree the name is confusing—perhaps it would be better if the name were “Monetary Stock-Flow Consistent Models”. With that clarification, there is no way that Neoclassical DSGE models are stock-flow consistent in a monetary sense.

Even after the crisis, most Neoclassical DSGE models don’t include money or debt in any intrinsic way (the financial sector turns up as another source of “frictions” that slow down a convergence to equilibrium), and they certainly don’t treat the outstanding stock of private debt as a major factor in the economy.

Heterodox SFCM models do include these monetary and debt flows, and therefore the stocks as well. A trend—like that in the mid-1990s till 2000—that requires private debt to rise faster than GDP indefinitely will be identified as a problem for the economy by a heterodox SFCM model, but not by a Neoclassical DSGE one. A Neoclassical author who believes the fallacious Loanable Funds model of banking is also likely to wrongly conclude that the level of private debt is irrelevant (except perhaps during a crisis).
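The arithmetic behind that judgement is a single line. Writing D for the stock of private debt, Y for nominal GDP, and g_D and g_Y for their growth rates (my notation here, not the KFBM’s):

```latex
% Growth of the private debt to GDP ratio
\frac{d}{dt}\!\left(\frac{D}{Y}\right)
  \;=\; \frac{D}{Y}\left(\frac{\dot{D}}{D} - \frac{\dot{Y}}{Y}\right)
  \;=\; \frac{D}{Y}\,\bigl(g_D - g_Y\bigr)
```

Whenever private debt grows faster than GDP the ratio rises without limit, so a trend that requires g_D > g_Y indefinitely must eventually break. That is precisely the question a monetary SFCM model is built to ask.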

This makes heterodox SFCM models—such as the Kingston Financial Balances Model (KFBM) of the US economy produced by researchers at my own Department—very different to mainstream DSGE models.

No decent results from Agent-Based Models?

Noah concludes with the statement that what is known as “Agent Based Modelling” (ABM), which is very popular in heterodox circles right now, hasn’t yet produced robust results:

A second class of quantitative heterodox models, called “agent-based models,” have gained some attention, but so far no robust, reliable results have emerged from the research program. (Noah Smith, 8th August 2016)

Largely speaking, this is true—if you want to use these models for macroeconomic forecasting. But they are useful for illustrating an issue that the mainstream avoids: “emergent properties”. A population, even of very similar entities, can generate results that can’t be extrapolated from the properties of any one entity taken in isolation. My favourite example here is what we commonly call water. A single H2O molecule is not “water”, or “steam”, let alone a “snowflake”. All these peculiar and, to life, essential features of H2O are “emergent properties” of the interaction of large numbers of H2O molecules under different environmental conditions. None of them is a property of a single molecule of H2O taken in isolation.
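For readers who have never met an agent-based model, here is a minimal sketch of the classic demonstration of emergence from economics itself, Schelling’s segregation model, in a stripped-down implementation of my own that is purely for illustration. Each agent follows a mild rule (relocate only if fewer than 30 per cent of your neighbours are like you), yet the aggregate outcome is sharp segregation that no individual agent intends.

```python
# A minimal sketch of Schelling's segregation model (a stripped-down toy
# implementation, for illustration only). Agents of two types sit on a grid
# and relocate to a random empty cell if fewer than 30% of their occupied
# neighbours are of their own type. A mildly tolerant individual rule
# produces a strongly segregated aggregate: an emergent property.
import numpy as np

rng = np.random.default_rng(1)
N, EMPTY = 50, 0
grid = rng.choice([EMPTY, 1, 2], size=(N, N), p=[0.10, 0.45, 0.45])

def occupied_neighbours(g, i, j):
    """Occupied cells in the 8-cell neighbourhood (the grid wraps at the edges)."""
    cells = [g[(i + di) % N, (j + dj) % N]
             for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
    return [c for c in cells if c != EMPTY]

def like_share(g, i, j):
    """Fraction of occupied neighbours that match the agent at (i, j)."""
    occ = occupied_neighbours(g, i, j)
    return sum(c == g[i, j] for c in occ) / len(occ) if occ else 1.0

def average_similarity(g):
    shares = [like_share(g, i, j)
              for i in range(N) for j in range(N) if g[i, j] != EMPTY]
    return sum(shares) / len(shares)

print(f"average similarity before: {average_similarity(grid):.2f}")  # about 0.50
for _ in range(30):                                    # rounds of relocation
    empties = [(i, j) for i in range(N) for j in range(N) if grid[i, j] == EMPTY]
    movers  = [(i, j) for i in range(N) for j in range(N)
               if grid[i, j] != EMPTY and like_share(grid, i, j) < 0.30]
    for (i, j) in movers:
        k = int(rng.integers(len(empties)))
        ni, nj = empties.pop(k)                        # pick a random empty cell
        grid[ni, nj], grid[i, j] = grid[i, j], EMPTY   # move there, vacate old cell
        empties.append((i, j))
print(f"average similarity after:  {average_similarity(grid):.2f}")  # typically 0.7+
```

No agent in this toy wants segregation; the segregated pattern is a property of the population, not of any individual in it. That is exactly the sense in which an aggregate is not a scaled-up individual.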

Neoclassical economists unintentionally proved this about isolated consumers as well, in what is known as the Sonnenschein-Mantel-Debreu theorem. But they have sidestepped its results ever since.

The theorem establishes that even if an economy consists entirely of rational utility maximizers who each, taken in isolation, can be shown to have a downward-sloping individual demand curve, the market demand curve for any given market can theoretically take any polynomial shape at all:

Can an arbitrary continuous function … be an excess demand function for some commodity in a general equilibrium economy?… we prove that every polynomial … is an excess demand function for a specified commodity in some n commodity economy… every continuous real-valued function is approximately an excess demand function. (Sonnenschein, 1972, “Market Excess Demand Functions”, Econometrica 40, pp. 549-563; quote from pp. 549-550)
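For completeness, here is the standard modern statement of the result (the Sonnenschein-Mantel-Debreu formulation, in my notation rather than Sonnenschein’s own wording):

```latex
% Sonnenschein-Mantel-Debreu, stated compactly (standard formulation, my notation):
\textbf{Theorem.} Let $z : P \to \mathbb{R}^n$ be any continuous function on a
set $P$ of price vectors bounded away from zero, satisfying only homogeneity of
degree zero, $z(\lambda p) = z(p)$ for all $\lambda > 0$, and Walras' law,
$p \cdot z(p) = 0$. Then there exists an exchange economy with $n$ consumers,
each with continuous, strictly convex and monotone preferences, whose aggregate
excess demand function coincides with $z$ on $P$.
```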

Alan Kirman suggested the proper reaction to this discovery almost 30 years ago: that the decision by the Neoclassical school, at the time of the second great controversy over economic theory, to abandon class-based analysis, was unsound. Since even such a basic concept (to the Neoclassical school) as a downward-sloping demand curve could not be derived by extrapolating from the properties of an isolated individual, the only reasonable procedure was to work at the level of groups with “collectively coherent behaviour”—what the Classical School called “social classes”:

If we are to progress further we may well be forced to theorise in terms of groups who have collectively coherent behaviour. Thus demand and expenditure functions if they are to be set against reality must be defined at some reasonably high level of aggregation. The idea that we should start at the level of the isolated individual is one which we may well have to abandon. There is no more misleading description in modern economics than the so-called microfoundations of macroeconomics which in fact describe the behaviour of the consumption or production sector by the behaviour of one individual or firm. If we aggregate over several individuals, such a model is unjustified. (Kirman, A. 1989, “The Intrinsic Limits of Modern Economic Theory: The Emperor Has No Clothes”, Economic Journal 99(395): 126-139)

Instead of taking this sensible route, Neoclassical economists—mainly without consciously realising it—took the approach of making the absurd assumption that the entire economy could be treated as a single individual in the fiction of a “representative agent”.

Mendacious textbooks played a large role here—which is why I say that they did this without realising that they were doing so. Most of today’s influential Neoclassical economists would have learnt their advanced micro from Hal Varian’s textbook. Here’s how Varian “explained” the Sonnenschein Mantel Debreu results:

“it is sometimes convenient to think of the aggregate demand as the demand of some ‘representative consumer’… The conditions under which this can be done are rather stringent, but a discussion of this issue is beyond the scope of this book…” (Varian 1984, p. 268)

The “convenience” of the “representative consumer” led directly to Real Business Cycle models of the macroeconomy, and thence DSGE—which Neoclassicals are now beginning to realise was a monumental blunder.

Multi-agent modelling may not lead to a new policy-oriented theory of macroeconomics. But it acquaints those who do it with the phenomenon of emergent properties—that an aggregate does not function as a scaled-up version of the entities that comprise it. That’s a lesson that Neoclassical economists still haven’t absorbed.

Previous periods of crisis in economic theory

Since I began this post by calling the current debate the “5th great conflict over the nature of economics”, I’d better detail the first three (the 4th being Keynes’s battle in the 1930s). These were:

  • The “Methodenstreit” dispute between the Austrian and German Historical Schools—which was a dispute about a priori reasoning versus empirical data;
  • The Neoclassical revolt against the Classical school after Marx had turned the latter into the basis for a critique of capitalism, rather than a defence of it as it was with Smith and Ricardo; and
  • The event that I personally identify as the real point at which economics went wrong: Smith’s substitution of “the division of labour” for the Physiocratic argument that human productivity actually emanates from harnessing the energy of the Sun, as the source of rising productivity in capitalism. Though the Physiocrats were wrong that agriculture was the only “productive” sector—manufacturing being “sterile” in their view, since all it did was transform the outputs of agriculture into different forms, when in fact it harnesses “free energy” (energy from the Sun, fossil fuels and nuclear processes) to do useful work even more effectively than agriculture does—they were right that harnessing free energy is the basis of the productivity of capitalism.

I’ll address this very last wonkish issue in a future post.

Steve Keen
Steve Keen (born 28 March 1953) is an Australian-born, British-based economist and author. He considers himself a post-Keynesian, criticising neoclassical economics as inconsistent, unscientific and empirically unsupported. The major influences on Keen's thinking about economics include John Maynard Keynes, Karl Marx, Hyman Minsky, Piero Sraffa, Augusto Graziani, Joseph Alois Schumpeter, Thorstein Veblen, and François Quesnay.
