
Econometrics — a mixture of ‘metaphors, metaphysics, and a pinch of bluff’


This effort to create foundations for the probability approach in econometrics finally results in an inconsistent set of claims in its defence.

First, there are vast amounts of experience which warrant a frequency interpretation. This is supported by repetitive discussions of experimental design, but the inability to experiment inspires an epistemological interpretation. Then Haavelmo mentions the futility of bothering with these issues because the probability approach is most of all a useful tool. This would be an instrumentalistic justification for its use if Haavelmo gave supportive evidence for his claim. There is not one example which attempts to do so …

In practice econometricians mostly have to do without experiments. From the early days on, they tried to provide stand-ins for real experimentation. First, it was argued that economic models themselves could substitute as alternatives to experiments. However, this was not intended so much to justify probabilistic inference as, primarily, to level economics with the ‘experimental sciences’. Then Haavelmo introduced his version of experimental design in order to make economic theories (statistically) operational. Again, this could not provide the justification for using the probabilistic framework. The founders of econometrics tried to adapt the sampling approach to a non-experimental small sample domain. They tried to justify this with a priori and analytical arguments. However, the ultimate argument for a ‘probability approach in econometrics’ consists of a mixture of metaphors, metaphysics and a pinch of bluff.

As always — if people do not give justifications for the validity of the assumptions they make, then those assumptions are nothing but assumptions. Just talking about — as did Haavelmo — “experimental design,” “hoping” that nature takes care of the necessary ceteris paribus conditions, and believing it “fruitful” to treat economic data as if they were a set of truly random data does not take us very far. Bridging from econometric and statistical models to real-world target systems demands much more. Most of all, you have to — not just rhetorically — search for the real data-generating mechanisms.

The assumptions of imaginary “super-populations” or “hypothetical infinite populations” are some of the many dubious assumptions used in modern econometrics.

As social scientists — and economists — we have to confront the all-important question of how to handle uncertainty and randomness. Should we define randomness in terms of probability? If we do, we have to accept that to speak of randomness we also have to presuppose the existence of nomological probability machines, since probabilities cannot be spoken of — and, strictly speaking, do not exist at all — without specifying such system contexts. Accepting a domain of probability theory and a sample space of infinite populations also implies that judgments are made on the basis of observations that are never actually made!

Infinitely repeated trials or samplings never take place in the real world. So that cannot be a sound inductive basis for a science with aspirations of explaining real-world socio-economic processes, structures or events. It’s not tenable.
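To see what is at stake, consider a minimal simulation sketch (the probability machine and all its parameters are invented purely for illustration). The “95%” of a textbook confidence interval is a property of hypothetically repeated draws from a known machine; with one real-world sample, that ensemble of repetitions simply does not exist:

```python
import numpy as np

rng = np.random.default_rng(0)

true_mean = 10.0        # known only because we built the 'machine' ourselves
n, reps = 50, 10_000    # 'reps' stands in for the infinite repetitions

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, 2.0, size=n)   # the nomological machine
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    if m - 1.96 * se <= true_mean <= m + 1.96 * se:
        covered += 1

print(f"coverage over {reps} imaginary repetitions: {covered / reps:.3f}")
# With one real sample there is exactly one interval and no ensemble:
# the '0.95' refers to draws that are never made.
```

The coverage figure is real only inside the simulation, where we constructed the machine ourselves; in empirical economics, the ensemble of repetitions is pure fiction.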

And as if this wasn’t enough, one could, as argued, also seriously wonder what kind of “populations” these statistical and econometric models are ultimately based on. Why should we as social scientists — and not as pure mathematicians working with formal-axiomatic systems without the urge to confront our models with real target systems — unquestioningly accept models based on concepts like the “infinite superpopulations” used in e.g. the potential-outcome framework that has lately become so popular in the social sciences?

Of course, one could treat observational or experimental data as random samples from real populations. I have no problem with that, at least as long as one remembers that the random-sampling assumptions usually do not apply to the kind of ‘convenience samples’ that economic data standardly constitute, and that the usual statistical inferences therefore cannot be made.
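A simple sketch of why this matters (all numbers made up): apply the textbook standard-error formula, which presupposes random sampling, to a convenience sample, and it will cheerfully report high precision around a badly biased estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# An invented finite population: two income groups with different means.
population = np.concatenate([rng.normal(20, 5, size=80_000),   # group A
                             rng.normal(60, 5, size=20_000)])  # group B
true_mean = population.mean()

random_sample = rng.choice(population, size=500, replace=False)
convenience_sample = np.sort(population)[:500]   # the easiest-to-reach units

for name, s in [("random", random_sample), ("convenience", convenience_sample)]:
    se = s.std(ddof=1) / np.sqrt(len(s))
    print(f"{name:12s} mean = {s.mean():6.2f}  se = {se:5.2f}  true = {true_mean:6.2f}")
# The convenience estimate lands far from the truth, yet its nominal standard
# error signals high precision: the formula answers a question about random
# sampling that was never actually performed.
```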

However, probabilistic econometrics does not content itself with that kind of population. Instead, it creates imaginary populations of “parallel universes” and assumes that our data are random samples from such “infinite superpopulations.”

But this is actually nothing but hand-waving! And it is inadequate for real science. In social sciences — including economics — it’s always wise to ponder C. S. Peirce’s remark that universes are not as common as peanuts …

Nowadays it has almost become a self-evident truism among economists that you cannot expect people to take your arguments seriously unless they are based on or backed up by advanced econometric modelling. So legions of mathematical-statistical theorems are proved — and heaps of fiction are produced, masquerading as science. Yet the seeming rigour of the econometric modelling rests on far-reaching assumptions that are frequently not supported by data.

Modelling assumptions made in statistics and econometrics are more often than not made for reasons of mathematical tractability rather than verisimilitude. That is unfortunately also a reason why the methodological ‘rigour’ encountered in statistical and econometric research is to a large degree nothing but a deceptive appearance. The models constructed may seem technically advanced and very ‘sophisticated,’ but that is usually only because the problems discussed here have been swept under the carpet. Assuming that our data are generated by ‘coin flips’ in an imaginary ‘superpopulation’ only means that we get answers to questions we are not asking.
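One last illustrative sketch (the persistence parameter and series length are invented): treat a serially dependent series — the typical shape of economic data — as if it were i.i.d. ‘coin flips’ from a superpopulation, and the naive standard error of the mean understates the actual sampling variability several-fold:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps, rho = 200, 5_000, 0.9   # hypothetical AR(1) persistence

def ar1(n, rho, rng):
    """One serially dependent 'economic' series: x_t = rho * x_{t-1} + e_t."""
    eps = rng.normal(size=n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

means, naive_ses = [], []
for _ in range(reps):
    x = ar1(n, rho, rng)
    means.append(x.mean())
    naive_ses.append(x.std(ddof=1) / np.sqrt(n))  # pretends the draws are i.i.d.

print(f"actual sd of the sample mean : {np.std(means):.3f}")
print(f"average naive i.i.d. se      : {np.mean(naive_ses):.3f}")
# The i.i.d. 'coin-flip' framing is adopted for tractability, not because the
# data-generating mechanism looks anything like it.
```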

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history and methodology of economics.
