from Lars Syll
So, certainly, both non-theorists and some theorists have little patience for research that displays mathematical ingenuity but has no value as social science. But defining this work exactly is impossible. This sort of work is like pornography: quite simple to recognize when one sees it.
Jeffrey Ely
As researchers, we (mostly) want to try to understand and explain reality. How we do this differs between various disciplines and thought traditions. Creating a ‘map’ at a 1:1 scale of reality is of little help. We have to rely on some kind of ‘approximation’ to grasp an often inaccessible reality. Exactly how this approximation is done varies significantly between different research traditions.
In mathematics, some tricky questions are thought to be solvable simply by defining things clearly and conveniently.
I have no problem with solving problems in mathematics by ‘defining’ them away. In pure mathematics, you are always allowed to take an epistemological view on problems and ‘axiomatically’ decide that 0.999… is 1. But how about the real world? In that world, from an ontological point of view, 0.999… is never 1! Although mainstream economics seems to take for granted that its epistemology-based models rule the roost even in the real world, economists ought to do some ontological reflection when they apply their mathematical models to it.
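For reference, the standard real-analysis calculation behind the mathematicians’ identification is a short geometric series:

\[ 0.999\ldots \;=\; \sum_{n=1}^{\infty} \frac{9}{10^{n}} \;=\; \frac{9/10}{1 - 1/10} \;=\; 1. \]

The dispute here is not about that calculation, but about whether such definitional moves license ontological claims about the real world.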
In econometrics we often run into the ‘Cauchy logic’ — the data are treated as if they were drawn from a larger population, a ‘superpopulation,’ in which repeated realizations of the data are imagined. Just imagine there could be more worlds than the one we live in and the problem is ‘fixed.’
Accepting Haavelmo’s domain of probability theory and sample space of infinite populations — just like Fisher’s ‘hypothetical infinite population, of which the actual data are regarded as constituting a random sample,’ von Mises’s ‘collective’ or Gibbs’s ‘ensemble’ — also implies that judgments are made based on observations that are actually never made!
Infinitely repeated trials or samplings never take place in the real world. So they cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. Like the Cauchy-style mathematical logic of ‘defining’ away problems, this is simply not tenable.
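To make the superpopulation idea concrete, here is a minimal Python sketch (the data-generating process, sample size and seed are invented for illustration): the standard error attached to a single observed sample derives its meaning from imagined repeated draws from a ‘superpopulation’ that, outside the model world, is sampled exactly once.

import numpy as np

rng = np.random.default_rng(42)

# The 'superpopulation': a purely imagined data-generating process.
SUPER_MEAN, SUPER_SD = 10.0, 2.0

# In reality we observe exactly ONE sample, once.
observed = rng.normal(SUPER_MEAN, SUPER_SD, size=50)
se_reported = observed.std(ddof=1) / len(observed) ** 0.5
print(f"Standard error reported for the single sample: {se_reported:.3f}")

# What that number is supposed to describe: the spread of sample means
# across hypothetical re-samplings that never actually take place.
imagined_means = [rng.normal(SUPER_MEAN, SUPER_SD, size=50).mean()
                  for _ in range(10_000)]
print(f"Spread of means over 10,000 imagined re-samplings: {np.std(imagined_means):.3f}")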
In social sciences — including economics — it is always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …
It is not entirely satisfactory to proceed in this elusive manner, especially not within the social sciences. Nevertheless, we often see the application of this strategy in economics. But problems arise when we confront the theories with reality.
Take, for example, game theory.
Game theory is, like mainstream economics in general, model-oriented. There are many reasons for this – the history of the discipline, ideals imported from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc. Most mainstream economists and game theorists want to explain social phenomena, structures and patterns, based on the assumption that agents act in an optimizing (rational) way to satisfy given, stable and well-defined goals.
But if models are to be relevant, we also have to argue that their precision and rigour still hold when they are applied to real-world situations. They often do not. When addressing real economies, the idealizations and abstractions necessary for the deductivist machinery to work simply do not hold. If the real world is fuzzy, vague and indeterminate, then why should our models build upon a desire to describe it as precise and predictable? Being told that the model is rigorous and amenable to ‘successive approximations’ to reality is of little avail, especially when the law-like (nomological) core assumptions are highly questionable and extremely difficult to test.
Many mainstream economists think that game theory is useful and can be applied to real life to give important and interesting results. That, however, is a rather unsubstantiated view. What game theory does is, strictly seen, nothing more than investigate the logic of behaviour among non-existent robot-imitations of humans. Knowing how those ‘rational fools’ play games does not help us to decide and act when interacting with real people. Knowing some game theory may actually make us behave in a way that hurts both ourselves and others. Decision-making and social interaction are always embedded in socio-cultural contexts. If it does not take account of that, game theory will remain an analytical cul-de-sac that will never be able to come up with useful and relevant explanations.
Game theorists extensively exploit ‘rational choice’ assumptions in their explanations. That is probably also the reason why game theory has not been able to accommodate well-known anomalies in its theoretical framework. That should hardly come as a surprise to anyone. Game theory, with its axiomatic view of individuals’ tastes, beliefs, and preferences, cannot accommodate very much of real-life behaviour. It is hard to find really compelling arguments in favour of continuing down its barren paths, since individuals obviously do not comply with, and are not guided by, game theory.
Applications of game theory have on the whole resulted in massive predictive failures. People simply do not act according to the theory. They do not know or possess the assumed probabilities, utilities, beliefs or information to calculate the different (‘subgame,’ ‘trembling hand perfect’) Nash equilibria. They may be reasonable and make use of their given cognitive faculties as well as they can, but they are obviously not those perfect and costless hyper-rational expected utility maximizing calculators game theory posits. And fortunately so. Being ‘reasonable’ makes them avoid all those made-up ‘rationality’ traps that game theory would have put them in if they had tried to act as consistent players in a game theoretical sense.
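A concrete illustration of the gap (a hypothetical Python sketch: the game variant, payoff numbers and function names are invented for illustration): in the well-known centipede game, backward induction, the logic behind subgame-perfect Nash equilibrium, tells the first mover to grab the small pot immediately, whereas experimental subjects typically pass for several rounds before anyone takes.

def backward_induction(take_payoffs, pass_payoff_last):
    """Backward-induction ('subgame perfect') solution of a centipede game.

    take_payoffs[i]  -- (mover, other) payoffs if the mover takes at node i
    pass_payoff_last -- (mover, other) payoffs if the last mover passes
    """
    n = len(take_payoffs)
    # At the last node: take, or pass and end the game.
    if take_payoffs[-1][0] > pass_payoff_last[0]:
        mover_val, other_val = take_payoffs[-1]
        first_take = n - 1
    else:
        mover_val, other_val = pass_payoff_last
        first_take = None
    # At earlier nodes: passing makes me the 'other' player at the next node.
    for i in range(n - 2, -1, -1):
        if take_payoffs[i][0] > other_val:
            mover_val, other_val = take_payoffs[i]
            first_take = i
        else:
            mover_val, other_val = other_val, mover_val
    return first_take

# Illustrative doubling-pot payoffs, (mover, other) at each node:
take = [(0.40, 0.10), (0.80, 0.20), (1.60, 0.40), (3.20, 0.80)]
print(backward_induction(take, pass_payoff_last=(1.60, 6.40)))
# -> 0: the 'rational' mover takes at the very first node, although in
#    laboratory experiments only a small minority of subjects do so.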
Game theorists can, of course, marginally modify their toolbox and fiddle with the auxiliary assumptions to get whatever outcome they want. But as long as the ‘rational choice’ core assumptions are left intact, it seems a pointless effort to keep adding to an already excessive deductive-axiomatic formalism. If you do believe in the real-world relevance of game-theoretical ‘science fiction’ assumptions such as expected utility, ‘common knowledge,’ ‘backward induction,’ correct and consistent beliefs, etc., then adding things like ‘framing,’ ‘cognitive bias,’ and different kinds of heuristics does not ‘solve’ any problem. If we want to construct a theory that can provide us with explanations of individual cognition, decisions, and social interaction, we have to look for something else.
How do we explain the lack of agreement between theory and reality? And should we really take all these unrealistic nonsense theories seriously?
The purported strength of New Classical macroeconomics is that it has firm anchorage in preference-based microeconomics, especially the decisions taken by inter-temporal utility-maximizing “forward-looking” individuals.
To some of us, however, this has come at too high a price. The quasi-religious insistence that macroeconomics has to have microfoundations — without ever presenting either ontological or epistemological justifications for this claim — has turned a blind eye to the weakness of the whole enterprise of trying to depict a complex economy based on an all-embracing representative actor equipped with superhuman knowledge, forecasting abilities and forward-looking rational expectations. It is as if — after having swallowed the sour grapes of the Sonnenschein-Mantel-Debreu theorem — these economists want to resurrect the omniscient Walrasian auctioneer in the form of all-knowing representative actors equipped with rational expectations and assumed to somehow know the true structure of our model of the world.
Methodologically New Classical macroeconomists like Lucas and Sargent build their whole approach on the utopian idea that there are some ‘deep structural constants’ that never change in the economy. To most other economists it is self-evident that economic structures change over time. That was one of the main points in Keynes’ critique of Tinbergen’s econometrics. Economic parameters do not remain constant over long periods. If there is anything we know, it is that structural changes take place.
The Lucas-Sargent Holy Grail of a ‘true economic structure’ that remains constant even in the long run is, from a realist perspective, simply ludicrous. That anyone outside of Chicago should take this kind of stuff seriously is totally incomprehensible. As Robert Solow says:
Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon. Now, Bob Lucas and Tom Sargent like nothing better than to get drawn into technical discussions, because then you have tacitly gone along with their fundamental assumptions; your attention is attracted away from the basic weakness of the whole story. Since I find that fundamental framework ludicrous, I respond by treating it as ludicrous — that is, by laughing at it — so as not to fall into the trap of taking it seriously and passing on to matters of technique.
There is also an interesting question here regarding the psychological mechanisms that cause researchers to devote themselves for decades to completely unrealistic ‘Glasperlenspiel’ (glass bead games). Often, the driving forces seem to have more to do with aesthetics, status, power, or a kind of ‘religious’ conviction than with a sound grounding in reality. Building mathematical models, as all of us who have engaged in this know, gives a sense of control and provability, but only as long as we stay within the model world. As soon as our models are confronted with reality, we lose control and the sense of security that deductions within our constructed systems provide. We should certainly protect the freedom of research, but it is often dangerous for research when the focus on formal-logical and axiomatic-deductive research strategies — as in economics — comes to dominate an entire scientific discipline.
Seen from a deductive-nomological perspective, typical economic models (M) usually consist of a theory (T) — a set of more or less general (typically universal) law-like hypotheses (H) — and a set of (typically spatiotemporal) auxiliary assumptions (A). The auxiliary assumptions give ‘boundary’ descriptions such that it is possible to deduce logically (meeting the standard of validity) a conclusion (explanandum) from the premises T & A. Using this kind of model, game theorists are (portrayed as) trying to explain (predict) facts by subsuming them under T, given A. An obvious problem with the formal logical requirements of what counts as H is the often severely restricted reach of the ‘law.’ In the worst case, it may not apply to any real, empirical, relevant situation at all. And if A is not true, then M does not really explain (although it may predict) at all. Deductive arguments should be sound – valid and with true premises – so that we are assured of having true conclusions. Constructing theoretical models assuming ‘common knowledge’ and ‘rational expectations’ tells us nothing about situations where knowledge is ‘non-common’ and expectations are ‘non-rational.’
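Schematically, the deductive-nomological structure just described can be written as

\[ T \,\&\, A \;\vdash\; E \]

where E is the explanandum. The deduction is valid by construction; it is sound, and hence genuinely explanatory, only if T and A are both true, which is exactly where the trouble with H’s restricted reach and A’s falsity enters.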
Building theories and models that are ‘true’ in their own very limited ‘idealized’ domain is of limited value if we cannot supply bridges to the real world. ‘Laws’ that only apply in specific ‘idealized’ circumstances — in ‘nomological machines’ — are not the stuff that real science is built of.
When confronted with the massive empirical refutations of almost all the models they have set up, many game theorists react by saying that these refutations only hit A (the Lakatosian ‘protective belt’), and that by ‘successive approximations’ it is possible to make the models more readily testable and predictively accurate. Even if T & A1 do not have much empirical content, if by successive approximation we reach, say, T & A25, we are to believe that we can finally arrive at robust and true predictions and explanations.
Hans Albert’s ‘Model Platonism’ critique shows that there is a strong tendency for modellers to use the method of successive approximations as a kind of ‘immunization,’ taking for granted that there can never be any fault with the theory: explanatory and predictive failures hinge solely on the auxiliary assumptions. That the kind of theories and models used by game theorists should all be held non-defeasibly corroborated seems, however — to say the least — rather unwarranted.
Retreating into looking upon models and theories as some kind of ‘conceptual exploration,’ and giving up any hope whatsoever of relating them to the real world, is pure defeatism. Instead of trying to bridge the gap between models and the world, these theorists simply decide to look the other way.
To yours truly, this kind of scientific defeatism is equivalent to surrendering our search for understanding and explaining the world we live in. It cannot be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything about the world we live in — then why should we waste any of our precious time on them?
Surely, qualities such as precision, simplicity, elegance, consistency, and rigour are desirable, but at what cost?
Mainstream theoretical economics is still under the spell of the Bourbaki tradition in mathematics. Theoretical rigour is everything. Studying real-world economies and empirical corroboration/falsification of theories and models is nothing. Separating questions of logic and empirical validity may — of course — help economists to focus on producing rigorous and elegant mathematical theorems that people like Lucas and Sargent consider “progress in economic thinking.” To most other people, not being concerned with empirical evidence and model validation is a sign of social science becoming totally useless and irrelevant. Economic theories built on assumptions known to be ridiculously artificial, without an explicit relationship to the real world, are a dead end. That’s probably also the reason why Neo-Walrasian general equilibrium analysis today (at least outside Chicago) is considered a total waste of time. In the trade-off between relevance and rigour, priority should always be given to the former when it comes to social science. The only thing followers of the Bourbaki tradition within economics — like Karl Menger, John von Neumann, Gerard Debreu, Robert Lucas and Thomas Sargent — have given us are irrelevant model abstractions with no bridges to real-world economies. It’s difficult to find a more poignant example of a total waste of time in science.
Models may help us think through problems. But we should never forget that the formalism we use in our models is not self-evidently transportable to a largely unknown and uncertain reality. The tragedy with mainstream economic theory is that it thinks that the logic and mathematics used are sufficient for dealing with our real-world problems. They are not! Model deductions based on questionable assumptions can never be anything but pure exercises in hypothetical reasoning. And that kind of reasoning cannot establish the truth value of facts. Never has. Never will.
Sometimes we must acknowledge that the capacity for theoretical abstraction and model-building, rather than being something positive, can become a burden, especially when one ‘forgets’ that the theories, at some point in their approach to reality, must also be confronted with it. Social science theory without empirical testing is nothing but speculation. It often looks impressive, but it is frequently nothing more than a house of cards built on utterly unrealistic assumptions. Simply settling for constructing all sorts of Walt Disney worlds without a connection to the world we live in is not science. From a scientific point of view, this is certainly ‘risk-free,’ but at the cost of the evidential weight being non-existent. It is science fiction. In the world of science, there is only one place for it: the trash bin.
Unfortunately, the problems do not end there. Even among those who have realized the need for empirical testing of theories, there is often a tendency to consider only knowledge based on measurable statistical data as relevant. Data of a more difficult-to-assess nature are left aside because they cannot be treated mathematically or statistically. Unfortunately, this means that much of the research that today produces ‘statistically significant’ results is, to say the least, incomplete.
Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — especially ‘as-if-random’ natural experiments and RCTs — can help us to answer questions concerning the external validity of economic models. In their view, they are, more or less, tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’
It is widely believed among mainstream economists that the scientific value of randomization — contrary to other methods — is more or less uncontroversial and that randomized experiments are free from bias. When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘experimental turn’ in economics. Strictly seen, randomization does not guarantee anything.
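A small simulation makes the point concrete (a hypothetical Python sketch; the covariate, sample size and threshold are invented for illustration): in any single randomization of a finite sample, treatment and control groups can end up badly imbalanced on a prognostic covariate purely by chance.

import numpy as np

rng = np.random.default_rng(7)

n = 40                        # a typical small-trial sample size
age = rng.normal(50, 12, n)   # an invented prognostic covariate

# A real trial randomizes once. Here we look at 10,000 possible
# randomizations of this one sample and check covariate balance.
gaps = []
for _ in range(10_000):
    treated = rng.permutation(n) < n // 2   # random half to treatment
    gaps.append(abs(age[treated].mean() - age[~treated].mean()))

gaps = np.array(gaps)
print(f"Share of randomizations with a mean-age gap over 5 years: {(gaps > 5).mean():.1%}")

With numbers like these, a non-trivial share of single randomizations produces a sizeable imbalance, which is precisely why ‘randomized’ cannot be equated with ‘unbiased in this particular trial.’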
‘Ideally’ controlled experiments tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural, or quasi) experiments to different settings, populations, or target systems, is not easy. Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.
The almost religious belief with which its propagators — including ‘Nobel prize’ winners like Duflo, Banerjee and Kremer — portray it cannot hide the fact that RCTs cannot be taken for granted to give generalizable results. That something works somewhere is no warrant for believing that it will work for us here, or that it works generally.
Leaning on an interventionist approach often means that instead of posing interesting questions on a social level, the focus is on individuals. Instead of asking about structural socio-economic factors behind, e.g., gender or racial discrimination, the focus is on the choices individuals make. Esther Duflo is a typical example of the dangers of this limiting approach. Duflo and her associates want to give up on ‘big ideas’ like political economy and institutional reform and instead go for solving more manageable problems ‘the way plumbers do.’ Yours truly is far from sure that is the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing interventions or manipulations in the form of RCTs.
The present RCT idolatry is dangerous. Believing randomization is the only way to achieve scientific validity blinds people to searching for and using other methods that in many contexts are better. Insisting on using only one tool often means using the wrong tool.
Nowadays it has almost become a self-evident truism among economists that you cannot expect people to take your arguments seriously unless they are based on or backed up by advanced econometric modelling. So legions of mathematical-statistical theorems are proved — and heaps of fiction are produced, masquerading as science. But the rigour of econometric modelling rests on far-reaching assumptions that are frequently simply not supported by data.
Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a sceptic of the pretences and aspirations of economic models and theories building on unwarranted idealisations. So far, I can’t really see that these models have yielded very much in terms of realistic and relevant economic knowledge.
All empirical sciences use simplifying or unrealistic assumptions (abstractions) in their modelling activities. That is not the issue — as long as the assumptions made are not unrealistic in the wrong way or for the wrong reasons.
But models do not only face theory. They also have to look to the world. Being able to model a ‘credible world,’ a world that somehow could be considered real or similar to the real world, is not the same as investigating the real world. Even though all theories are false, since they simplify, they may still possibly serve our pursuit of truth. But then they cannot be unrealistic or false in just any way: the falsehood or unrealisticness has to be qualified.
Explanation, understanding and prediction of real-world phenomena, relations and mechanisms therefore cannot be grounded on simply assuming, simpliciter, that the invoked isolations are warranted. If we cannot show that the mechanisms or causes we isolate and handle in our models are stable (in the sense that they do not change from one situation to another when we export them from our models to our target systems), then they only hold under ceteris paribus conditions and are a fortiori of limited value for our understanding, explanation and prediction of our real-world target system.
The obvious ontological shortcoming of a basically epistemic — rather than ontological — approach, is that the use of idealisations tout court does not guarantee that the correspondence between model and target is interesting, relevant, revealing or somehow adequate in terms of mechanisms, causal powers, capacities or tendencies. No matter how many convoluted refinements of concepts are made in the model, if the ‘simplifying’ idealisations made do not result in models similar to reality in the appropriate respects (such as structure, isomorphism, etc.), the surrogate system becomes a substitute system that does not bridge to the world but rather misses its target.
Constructing economic models that are somehow seen as ‘successively approximating’ economic reality is a rather unimpressive attempt at legitimising the use of fictitious idealisations, for reasons that have more to do with model tractability than with a genuine interest in understanding and explaining features of real economies. Many of the model assumptions standardly made in mainstream economics are restrictive rather than harmless, and a fortiori cannot in any sensible sense be considered approximations at all.
If you — to ‘save’ your theory or model — have to invoke things that do not exist, well, then your theory or model is probably not adequate to give the causal explanations sought.