Post Keynesian approaches to uncertainty
Two Post-Keynesian approaches to the nature and source of uncertainty and irreducible uncertainty have been considered, each with significantly different foundations and conclusions. The Human Abilities/Characteristics approach, related to epistemological uncertainty, has its primary foundations in human ignorance and inability; in Keynes’s case, the foundations lie in his logical interpretation of probability and his economic theorising from 1936 onwards. The Ergodic/Non-ergodic approach, related to ontological uncertainty, has its primary foundations in an ontological theory of the investigated world based on the presence or absence of ergodicity, as well as in Knight’s and Shackle’s reflections on probability, risk and uncertainty.
For some purposes, these differences may not matter. All of Keynes’s revolutionary contributions that rely on irreducible uncertainty are (apparently) unaffected – the principle of effective demand, suboptimal equilibria, non-neutral money, liquidity preference, state policy action and so on. And in relation to Post-Keynesian model building, irreducible uncertainty can still enter as an exogenous factor influencing a range of variables, with its deeper origins being of no particular relevance to such exercises.
For other purposes, however, the differences do matter. One reason concerns the history of ideas or, more narrowly, of economic thought, where the origins, nature and development of concepts are of central importance. Paramount here are the contributions and conceptual frameworks of Keynes, along with those of Knight, Shackle and the ergodicity theorists. A second is the need to have reasonably good understandings of these approaches before sensibly probing and evaluating them in relation to the further development of Post-Keynesianism – should one continue with more or less unchanged conceptions, modify these conceptions, or radically re-theorise them with a new approach? … A third relates to clarity of thought – unambiguous and reasonable meanings of terms (knowledge and uncertainty, for example), and the avoidance of any oversimplified dichotomies, are important in discussion and theorising. Finally, there are questions concerning deeper foundations, which relate not only to a range of methodological, philosophical and interdisciplinary issues, but also to the characterisation of a school of thought and its differences from other schools. In summarising Post-Keynesianism, explaining its key tenets, debating with orthodoxy, and even arguing over its own membership, Post-Keynesians need a set of adequate foundational concepts in which to theorise and discuss it.
The financial crisis of 2007–08 took most laypeople and economists by surprise. What went wrong with our macroeconomic models, since they obviously did not foresee the collapse or even make it conceivable?
There are many who have ventured to answer this question, and they have come up with a variety of answers, ranging from the exaggerated mathematization of economics to irrational and corrupt politicians.
But the root of our problem goes much deeper. It ultimately goes back to how we look upon the data we are handling. In “modern” macroeconomics — Dynamic Stochastic General Equilibrium, New Synthesis, New Classical and New ‘Keynesian’ — variables are treated as if drawn from a known “data-generating process” that unfolds over time and for which we therefore have access to heaps of historical time-series data. If we do not assume that we know the “data-generating process” – if we do not have the “true” model – the whole edifice collapses. And of course it has to. I mean, who really honestly believes that we should have access to this mythical Holy Grail, the data-generating process?
“Modern” macroeconomics obviously did not anticipate the enormity of the problems that unregulated “efficient” financial markets created. Why? Because it builds on the myth that we know the “data-generating process” and that we can describe the variables of our evolving economies as draws from an urn of known probability distributions with known means and variances.
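A minimal sketch of the assumption being criticised here, with invented numbers: if observed data really were independent draws from a known, stable distribution, then historical averages would pin down its parameters ever more precisely as the sample grows — which is exactly what these models lean on.

```python
# Sketch of the "known data-generating process" assumption.
# If the data are i.i.d. draws from a stable distribution, the sample
# mean converges on the true mean as the sample grows (law of large
# numbers). The critique in the text is that real economies need not
# behave like this at all. All numbers here are invented.
import random

random.seed(1)
true_mean = 2.0  # the "known" mean of the mythical data-generating process

def sample_mean(n):
    """Average of n draws from the assumed stable Gaussian process."""
    return sum(random.gauss(true_mean, 1.0) for _ in range(n)) / n

small = sample_mean(100)
large = sample_mean(100_000)
# With a stable DGP, the larger sample hugs the true mean ever tighter.
```

The point of the post, of course, is that this convergence is only guaranteed *inside* the model: nothing entitles us to assume that economic time series are draws from any such stable urn.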
This is like saying that you are going on holiday and that you know the chance of sunny weather is at least 30%, and that this is enough for you to decide whether or not to bring your sunglasses. You are supposed to be able to calculate the expected utility from the given probability of sunny weather and make a simple either-or decision. Uncertainty is reduced to risk.
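The sunglasses decision under "risk" can be sketched in a few lines — all utility numbers below are made up for illustration; the only point is that once the probability is treated as known, the decision collapses into a mechanical comparison of expected utilities:

```python
# Toy expected-utility calculation. Under "risk", the probability of sun
# is assumed known (p = 0.3), so the decision reduces to comparing two
# expected utilities. The payoff numbers are hypothetical.

def expected_utility(p_sun, u_sun, u_rain):
    """Expected utility given a known probability of sunny weather."""
    return p_sun * u_sun + (1 - p_sun) * u_rain

p_sun = 0.3  # the "known" probability the models assume we have

# Bringing sunglasses pays off if it is sunny, costs a little hassle if
# it rains; leaving them at home is the mirror image.
eu_bring = expected_utility(p_sun, u_sun=10, u_rain=-1)  # 0.3*10 + 0.7*(-1) = 2.3
eu_leave = expected_utility(p_sun, u_sun=-5, u_rain=0)   # 0.3*(-5) + 0.7*0 = -1.5

decision = "bring" if eu_bring > eu_leave else "leave"
```

With the probability taken as given, there is nothing left to deliberate about — which is precisely the reduction the post objects to.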
But as Keynes convincingly argued in his monumental Treatise on Probability (1921), this is not always possible. Often we simply do not know. According to one model the chance of sunny weather is perhaps somewhere around 10% and according to another – equally good – model the chance is perhaps somewhere around 40%. We cannot put exact numbers on these assessments. We cannot calculate means and variances. There are no given probability distributions that we can appeal to.
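To see why this matters for decisions, consider a hedged toy version of the two-models situation above (the payoff numbers are invented): with one model putting the chance of sun at 10% and an equally good model putting it at 40%, the expected-utility ranking can flip between models, so no unique "rational" choice is singled out.

```python
# Toy illustration of genuine (Keynesian/Knightian) uncertainty: two
# equally good models disagree about the probability of sun, and the
# "rational" choice flips with the model. All numbers are invented.

def expected_utility(p_sun, u_sun, u_rain):
    return p_sun * u_sun + (1 - p_sun) * u_rain

models = {"model A": 0.1, "model B": 0.4}  # no principled way to choose

choices = {}
for name, p in models.items():
    eu_bring = expected_utility(p, u_sun=10, u_rain=-3)
    eu_leave = expected_utility(p, u_sun=-2, u_rain=0)
    choices[name] = "bring" if eu_bring > eu_leave else "leave"

# model A: bring = -1.7 < leave = -0.2  ->  "leave"
# model B: bring =  2.2 > leave = -0.8  ->  "bring"
```

Since there is no given probability distribution to adjudicate between the models, expected-utility calculation alone cannot deliver the answer — which is the sense in which uncertainty here is irreducible to risk.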
In the end this is what it all boils down to. We all know that many activities, relations, processes and events are of the Keynesian uncertainty type. The data do not unequivocally single out one decision as the only “rational” one. Neither the economist nor the deciding individual can fully pre-specify how people will decide when facing uncertainties and ambiguities that are ontological facts of the way the world works.
Some macroeconomists, however, still want to be able to use their hammer. So they pretend that the world looks like a nail — that uncertainty can be reduced to risk — and build their mathematical models on that assumption. The result: financial crises and economic havoc.
How much better it would be – and how much smaller the risk of lulling ourselves into the comforting thought that we know everything, that everything is measurable and that we have everything under control – if we could instead just admit that we often simply do not know, and that we have to live with that uncertainty as best we can.
Fooling people into believing that one can cope with an unknown economic future in a way similar to playing at the roulette wheel is a sure recipe for only one thing — economic disaster.