Wednesday, February 21, 2024

Freedman’s Rabbit Theorem



from Lars Syll

In econometrics one often gets the feeling that many of its practitioners think of it as a kind of automatic inferential machine: input data and out comes causal knowledge. This is like pulling a rabbit from a hat. Great, but as renowned statistician David Freedman had it, first you must put the rabbit in the hat. And this is where assumptions come into the picture.

The assumption of imaginary ‘superpopulations’ is one of the many dubious assumptions used in modern econometrics, and as Clint Ballinger has highlighted, this is a particularly questionable rabbit-pulling assumption:

Inferential statistics are based on taking a random sample from a larger population … and attempting to draw conclusions about a) the larger population from that data and b) the probability that the relations between measured variables are consistent or are artifacts of the sampling procedure.

However, in political science, economics, development studies and related fields the data often represents as complete an amount of data as can be measured from the real world (an ‘apparent population’). It is not the result of a random sampling from a larger population. Nevertheless, social scientists treat such data as the result of random sampling. 

Because there is no source of further cases a fiction is propagated—the data is treated as if it were from a larger population, a ‘superpopulation’ where repeated realizations of the data are imagined. Imagine there could be more worlds with more cases and the problem is fixed …

What ‘draw’ from this imaginary superpopulation does the real-world set of cases we have in hand represent? This is simply an unanswerable question. The current set of cases could be representative of the superpopulation, and it could be an extremely unrepresentative sample, a one in a million chance selection from it …

The problem is not one of statistics that need to be fixed. Rather, it is a problem of the misapplication of inferential statistics to non-inferential situations.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history, and methodology of economics.
