Haavelmo and modern probabilistic econometrics — a critical-realist perspective (wonkish)

Mainstream economists often hold the view that criticisms of econometrics are the conclusions of sadly misinformed and misguided people who dislike and do not understand much of it. This is a gross misapprehension. To be careful and cautious is not equivalent to dislike.

The ordinary deductivist ‘textbook approach’ to econometrics views the modelling process as foremost an estimation problem, since one (at least implicitly) assumes that the model provided by economic theory is a well-specified and ‘true’ model. The more empiricist, general-to-specific methodology (often identified as the ‘LSE approach’) on the other hand views models as theoretically and empirically adequate representations (approximations) of a data generating process (DGP). Diagnostic tests (mostly some variant of the F-test) are used to ensure that the models are ‘true’ – or at least ‘congruent’ – representations of the DGP. The modelling process is here seen more as a specification problem, where poor diagnostic results may indicate a possible misspecification requiring re-specification of the model. The objective is standardly to identify models that are structurally stable and valid across a large time-space horizon. The DGP is not seen as something we already know, but rather as something we discover in the process of modelling it. Considerable effort is put into testing to what extent the models are structurally stable and generalizable over space and time.
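To make the general-to-specific workflow concrete, here is a minimal sketch in Python using simulated data and statsmodels. The simulated DGP, the variable names and the 5% threshold are illustrative assumptions of mine, not anything prescribed by the LSE methodology itself; real applications run a much richer battery of diagnostics.

```python
# General-to-specific sketch: start from a deliberately over-specified
# model, drop the least significant regressor until all survivors pass
# a conventional threshold, then run a structural-stability diagnostic.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import breaks_cusumolsresid

rng = np.random.default_rng(42)
n = 200

# Simulated DGP (an assumption for illustration): y depends on x1 and
# x2 only; x3 and x4 are irrelevant regressors in the 'general' model.
X_all = rng.normal(size=(n, 4))
y = 1.0 + 0.8 * X_all[:, 0] - 0.5 * X_all[:, 1] + rng.normal(size=n)

names = ["x1", "x2", "x3", "x4"]
keep = list(range(4))

while True:
    X = sm.add_constant(X_all[:, keep])
    fit = sm.OLS(y, X).fit()
    pvals = fit.pvalues[1:]          # p-values, skipping the constant
    worst = int(pvals.argmax())
    if pvals[worst] < 0.05:          # every remaining regressor 'significant'
        break
    print(f"dropping {names[keep.pop(worst)]} (p = {pvals[worst]:.2f})")

print("retained regressors:", [names[i] for i in keep])

# CUSUM test on the OLS residuals: a small p-value would signal
# parameter instability, i.e. doubtful 'congruence' with the DGP.
stat, pval, crit = breaks_cusumolsresid(fit.resid)
print(f"CUSUM stability test p-value: {pval:.2f}")
```

The point of the sketch is only that, in this tradition, the ‘truth’ of a model is operationalized as passing such diagnostics – which is precisely the presupposition questioned below.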

Although I have sympathy for this approach in general, there are still some unsolved ‘problematics’ with its epistemological and ontological presuppositions. There is, e.g., an implicit assumption that the DGP fundamentally has an invariant property and that models that are structurally unstable just have not been able to get hold of that invariance. But, as Keynes already maintained, one cannot just presuppose or take for granted that kind of invariance. It has to be argued for and justified. Grounds have to be given for viewing reality as satisfying conditions of model-closure. It is as if the lack of closure that shows up in the form of structurally unstable models could somehow be solved by searching for more autonomous and invariable ‘atomic uniformity.’ But whether reality is ‘congruent’ with this analytical prerequisite has to be argued for, and not simply taken for granted.

A great many models are compatible with what we know in economics — that is to say, do not violate any matters on which economists are agreed. Attractive as this view is, it fails to draw a necessary distinction between what is assumed and what is merely proposed as hypothesis. This distinction is forced upon us by an obvious but neglected fact of statistical theory: the matters ‘assumed’ are put wholly beyond test, and the entire edifice of conclusions (e.g., about identifiability, optimum properties of the estimates, their sampling distributions, etc.) depends absolutely on the validity of these assumptions. The great merit of modern statistical inference is that it makes exact and efficient use of what we know about reality to forge new tools of discovery, but it teaches us painfully little about the efficacy of these tools when their basis of assumptions is not satisfied.

Millard Hastay

Even granted that closures come in degrees, we should not compromise on ontology. Some methods simply introduce improper closures, closures that make the disjuncture between models and real-world target systems inappropriately large. ‘Garbage in, garbage out.’

Underlying the search for these immutable ‘fundamentals’ lies the implicit view of the world as consisting of entities with their own separate and invariable effects. These entities are thought of as being treatable as separate and additive causes, thereby making it possible to infer complex interaction from a knowledge of individual constituents with limited independent variety. But, again, whether this is a justified analytical procedure cannot be answered without confronting it with the nature of the objects the models are supposed to describe, explain or predict. Keynes himself thought it generally inappropriate to apply the ‘atomic hypothesis’ to such an open and ‘organic entity’ as the real world. As far as I can see these are still appropriate strictures that all econometric approaches have to face. Grounds for believing otherwise have to be provided by the econometricians.
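Keynes’s worry about treating causes as separate and additive can be illustrated with a small simulation (my own construction, purely for illustration, with an assumed DGP): when the true process contains an interaction, the effect of one variable depends on the level of another, and an additive model can only report a single averaged slope.

```python
# Non-additivity illustration: the true marginal effect of x1 is
# 0.5 + 1.5 * x2, so no single 'separate' coefficient for x1 exists.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)

# Assumed true process: the whole is not the sum of the parts.
y = 0.5 * x1 + 0.5 * x2 + 1.5 * x1 * x2 + rng.normal(size=n)

# Additive specification, as the 'atomic hypothesis' would license.
additive = sm.OLS(y, sm.add_constant(np.column_stack([x1, x2]))).fit()

print("additive slope for x1:", round(additive.params[1], 2))  # about 0.5
print("true slope for x1 at x2 = +1:", 0.5 + 1.5 * 1.0)        # 2.0
print("true slope for x1 at x2 = -1:", 0.5 + 1.5 * -1.0)       # -1.0
print("additive model R^2:", round(additive.rsquared, 2))      # low
```

With independently drawn regressors the additive fit even recovers the average slope, yet it is silent about the interaction, so any inference for a particular configuration of the ‘constituents’ can be badly off.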


Trygve Haavelmo, the father of modern probabilistic econometrics, wrote that he and other econometricians could not build a complete bridge between our models and reality by logical operations alone, but finally had to make “a non-logical jump” [1943:15]. Part of that jump consisted in the fact that econometricians “like to believe … that the various a priori possible sequences would somehow cluster around some typical time shapes, which if we knew them, could be used for prediction” [1943:16]. But since we do not know the true distribution, one has to look for the mechanisms (processes) that “might rule the data” and that hopefully persist so that predictions may be made. Of possible hypotheses on different time sequences (“samples” in Haavelmo’s somewhat idiosyncratic vocabulary), most had to be ruled out a priori “by economic theory”, although “one shall always remain in doubt as to the possibility of some … outside hypothesis being the true one” [1943:18].

[Haavelmo’s] effort to create foundations for the probability approach in econometrics finally results in an inconsistent set of claims in its defence. First, there are vast amounts of experience which warrant a frequency interpretation. This is supported by repetitive discussions of experimental design, but the inability to experiment inspires an epistemological interpretation. Then Haavelmo mentions the futility of bothering with these issues because the probability approach is most of all a useful tool. This would be an instrumentalistic justification for its use if Haavelmo gave supportive evidence for his claim. There is not one example which attempts to do so …

The founders of econometrics tried to adapt the sampling approach to a non-experimental small sample domain. They tried to justify this with a priori and analytical arguments. However, the ultimate argument for a ‘probability approach in econometrics’ consists of a mixture of metaphors, metaphysics and a pinch of bluff.

To Haavelmo and his modern followers, econometrics is not really in the truth business. The explanations we can give of economic relations and structures based on econometric models are “not hidden truths to be discovered” but rather our own “artificial inventions”. Models are consequently perceived not as true representations of the DGP, but rather as instrumentally conceived “as if”-constructs. Their ‘intrinsic closure’ is realized by searching for parameters showing “a great degree of invariance” or relative autonomy, and the ‘extrinsic closure’ by hoping that the ‘practically decisive’ explanatory variables are relatively few, so that one may proceed “as if … natural limitations of the number of relevant factors exist” [Haavelmo 1944:29].

Haavelmo seems to believe that persistence and autonomy can only be found at the level of the individual, since individual agents are seen as the ultimate determinants of the variables in the economic system.

But why the ‘logically conceivable’ really should turn out to be the case is difficult to see, at least if we are not satisfied with sheer hope. As we have already noted, Keynes reacted against using unargued-for and unjustified assumptions of complex structures in an open system being reducible to those of individuals. In real economies, it is unlikely that we will find many ‘autonomous’ relations and events. And one could of course, with Keynes and from a critical-realist point of view, also raise the objection that invoking a probabilistic approach to econometrics presupposes, e.g., that we are able to describe the world in terms of risk rather than genuine uncertainty.

And that is exactly what Haavelmo [1944:48] does: “To make this a rational problem of statistical inference we have to start out by an axiom, postulating that every set of observable variables has associated with it one particular ‘true’, but unknown, probability law.”

But using this “trick of our own” and just assigning “a certain probability law to a system of observable variables” cannot – any more than pure hope – build a firm bridge between model and reality. Treating phenomena as if they essentially were stochastic processes is not the same as showing that they essentially are stochastic processes. As Hicks [1979:120-21] so neatly puts it:

Things become more difficult when we turn to time-series … The econometrist, who works in that field, may claim that he is not treading on very shaky ground. But if one asks him what he is really doing, he will not find it easy, even here, to give a convincing answer … [H]e must be treating the observations known to him as a sample of a larger “population”; but what population? … I am bold enough to conclude, from these considerations that the usefulness of “statistical” or “stochastic” methods in economics is a good deal less than is now conventionally supposed. We have no business to turn to them automatically; we should always ask ourselves, before we apply them, whether they are appropriate to the problem in hand.

Rigour and elegance in the analysis do not make up for the gap between reality and model. It is the distribution of the phenomena itself, and not its estimation, that ought to be at the centre of the stage. A crucial ingredient of any economic theory that wants to use probabilistic models should be a convincing argument for the view that “there can be no harm in considering economic variables as stochastic variables” [Haavelmo 1943:13]. In most cases, no such arguments are given.

You are, of course, entitled – like Haavelmo and his modern probabilistic followers – to express a hope “at a metaphysical level” that there are invariant features of reality to uncover that also show up at the empirical level of observations as some kind of regularities. But is it a justifiable hope? I have serious doubts. The kinds of regularities you may hope to find in society are not to be found in the domain of surface phenomena, but rather at the level of causal mechanisms, powers and capacities. Persistence and generality have to be looked for at an underlying deep level. Most econometricians do not want to visit that playground. They are content with setting up theoretical models that give us correlations and possibly ‘mimic’ existing causal properties.

We have to accept that reality has no ‘correct’ representation in an economic or econometric model. There is no such thing as a ‘true’ model that can capture an open, complex and contextual system in a set of equations with parameters stable over space and time, and exhibiting invariant regularities. To just ‘believe,’ ‘hope,’ or ‘assume’ that such a model could possibly exist is not enough. It has to be justified in relation to the ontological conditions of social reality.

In contrast to those who want to give up on (fallible, transient and transformable) ‘truth’ as a relation between theory and reality and content themselves with “truth” as a relation between a model and a probability distribution, I think it is better to really scrutinize whether this latter attitude is feasible. To abandon the quest for truth and replace it with sheer positivism would indeed be a sad fate for econometrics. It is more rewarding to stick to truth as a regulatory ideal and keep on searching for theories and models that in relevant and adequate ways express those parts of reality we want to describe and explain.

Econometrics may be an informative tool for research. But if its practitioners do not investigate and make an effort to provide a justification for the credibility of the assumptions on which they erect their building, it will not fulfil its tasks. There is a gap between its aspirations and its accomplishments, and without more supportive evidence to substantiate its claims, critics will continue to consider its ultimate argument as a mixture of rather unhelpful metaphors and metaphysics. Maintaining that economics is a science in the ‘true knowledge’ business, yours truly remains a sceptic of the pretences and aspirations of econometrics. So far, I cannot really see that it has yielded very much in terms of relevant, interesting economic knowledge.

The marginal return on its ever higher technical sophistication in no way makes up for the lack of serious under-labouring of its deeper philosophical and methodological foundations that Keynes already complained about. The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that neither Haavelmo nor the legions of probabilistic econometricians following in his footsteps give supportive evidence for considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population. After having analyzed some of its ontological and epistemological foundations, I cannot but conclude that econometrics, on the whole, has not delivered ‘truth.’ And I doubt if that has ever been the intention of its main protagonists.

Our admiration for technical virtuosity should not blind us to the fact that we have to have a more cautious attitude towards probabilistic inference of causality in economic contexts. Science should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts” [Keynes 1971-89 vol XVII:427]. We should look out for causal relations, but econometrics can never be more than a starting point in that endeavour, since econometric (statistical) explanations are not explanations in terms of mechanisms, powers, capacities or causes. Firmly stuck in an empiricist tradition, econometrics is only concerned with the measurable aspects of reality. But there is always the possibility that there are other ‘variables’ — of vital importance and, although perhaps unobservable and non-additive, not necessarily epistemologically inaccessible — that were not considered for the model.

A rigorous application of econometric methods in economics really presupposes that the phenomena of our real-world economies are ruled by stable causal relations between variables. A perusal of the leading econom(etr)ic journals shows that most econometricians still concentrate on fixed-parameter models and that parameter values estimated in specific spatio-temporal contexts are presupposed to be exportable to totally different contexts. To warrant this assumption one would, however, have to convincingly establish that the targeted acting causes are stable and invariant, so that they maintain their parametric status after the bridging. The endemic lack of predictive success of the econometric project indicates that this hope of finding fixed parameters is a hope for which there really is no other ground than hope itself.
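The exportability problem can be made vivid with a toy simulation (again my own construction, under the assumption of a single mid-sample parameter shift): a slope estimated in one context is mechanically carried over to another where the underlying relation has changed.

```python
# Parameter 'export' across contexts: estimate in regime 1, predict in
# regime 2, where the (assumed) causal slope has shifted from 1.0 to -0.5.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
beta = np.where(np.arange(n) < 150, 1.0, -0.5)   # slope shifts mid-sample
y = beta * x + rng.normal(scale=0.5, size=n)

# Estimate on the first context only ...
fit = sm.OLS(y[:150], sm.add_constant(x[:150])).fit()
print("slope estimated in context 1:", round(fit.params[1], 2))  # about 1.0

# ... then export the parameter to the second context.
pred = fit.params[0] + fit.params[1] * x[150:]
rmse = float(np.sqrt(np.mean((y[150:] - pred) ** 2)))
print("out-of-context RMSE:", round(rmse, 2))  # far above the noise level 0.5
```

Nothing in the first-period fit warns of the failure; only the (assumed) invariance of the slope would license the transfer, and that invariance is exactly what has to be argued for.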

This is a more fundamental and radical problem than the celebrated ‘Lucas critique’ has suggested. The question is not whether deep parameters, absent at the macro-level, exist in ‘tastes’ and ‘technology’ at the micro-level. It goes deeper. Real-world social systems are not governed by stable causal mechanisms or capacities. This is the criticism that Keynes [1951 (1926): 232-33] launched against econometrics and inferential statistics as early as the 1920s:

The atomic hypothesis which has worked so splendidly in Physics breaks down in Psychics. We are faced at every turn with the problems of Organic Unity, of Discreteness, of Discontinuity – the whole is not equal to the sum of the parts, comparisons of quantity fail us, small changes produce large effects, the assumptions of a uniform and homogeneous continuum are not satisfied. Thus the results of Mathematical Psychics turn out to be derivative, not fundamental, indexes, not measurements, first approximations at the best; and fallible indexes, dubious approximations at that, with much doubt added as to what, if anything, they are indexes or approximations of.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose that causal mechanisms are atomistic and additive. When causal mechanisms operate in real-world social target systems they only do so in ever-changing and unstable combinations where the whole is more than a mechanical sum of parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made “nomological machines” they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics – like most of the contemporary endeavours of economic theoretical modelling – rather useless.

References

Haavelmo, Trygve (1943), Statistical testing of business-cycle theories. The Review of Economics and Statistics 25:13-18.

– (1944), The probability approach in econometrics. Supplement to Econometrica 12:1-115.

Hicks, John (1979), Causality in Economics. London: Basil Blackwell.

Keynes, John Maynard (1951 (1926)), Essays in Biography. London: Rupert Hart-Davis.

– (1971-89) The Collected Writings of John Maynard Keynes, vol. I-XXX. D E Moggridge & E A G Robinson (eds), London: Macmillan.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
