
Causal inference


Causal effects are comparisons of what did happen with what would have happened if people had received different treatments. Randomized treatment assignment has reduced this problem to the minor technical problem of drawing an inference about a finite population of people on the basis of a probability sample from that population. Expressed differently, if we design an experiment so that the actual world is a random draw from a set of possible worlds, then we can draw inferences about certain aspects of worlds that were never realized.
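To make the potential-outcomes logic of the quoted passage concrete, here is a minimal sketch (not from the post; the numbers, variable names and effect sizes are invented for illustration). Each person in a hypothetical finite population has two potential outcomes, only one of which is ever observed; random assignment makes the treated group a probability sample from that population, so a simple difference in means estimates the average causal effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical finite population of n people, each with two potential
# outcomes: y0 (outcome without treatment) and y1 (outcome with treatment).
# For any one person we can only ever observe one of the two.
n = 1000
y0 = rng.normal(loc=10.0, scale=2.0, size=n)
y1 = y0 + rng.normal(loc=1.5, scale=1.0, size=n)  # effects vary across people

true_ate = np.mean(y1 - y0)  # the causal estimand: average treatment effect

# Randomized treatment assignment: exactly half the population is treated,
# chosen at random, so the treated group is a probability sample from the
# finite population and licenses an inference about the unrealized outcomes.
treated = rng.permutation(n) < n // 2
observed = np.where(treated, y1, y0)

estimated_ate = observed[treated].mean() - observed[~treated].mean()

print(f"true ATE:      {true_ate:.3f}")
print(f"estimated ATE: {estimated_ate:.3f}")
```

Because assignment is random, the difference in means is an unbiased estimator of the finite-population average treatment effect, even though no individual counterfactual outcome is ever observed. That is the sense in which randomization turns a metaphysical comparison of possible worlds into an ordinary sampling problem.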

The assumption that “the actual world is a random draw from a set of possible worlds” is actually the core assumption of Haavelmo’s probabilistic revolution in econometrics.

To Haavelmo and his modern followers, econometrics is not really in the truth business. The explanations we can give of economic relations and structures based on econometric models are “not hidden truths to be discovered” but rather our own “artificial inventions”. Models are consequently perceived not as true representations of the data-generating processes, but rather as instrumentally conceived ‘as if’ constructs. Their ‘intrinsic closure’ is realized by searching for parameters showing “a great degree of invariance” or relative autonomy, and their ‘extrinsic closure’ by hoping that the ‘practically decisive’ explanatory variables are relatively few, so that one may proceed “as if … natural limitations of the number of relevant factors exist” [Haavelmo, ‘The probability approach in econometrics’, Supplement to Econometrica 12, 1944, p. 29].

But it is difficult to see why the ‘logically conceivable’ really should turn out to be the case. In real economies, it is unlikely that we find many ‘autonomous’ relations and events. And one could, of course, also object that invoking a probabilistic approach to econometrics presupposes, among other things, that we are able to describe the world in terms of risk rather than genuine uncertainty.

And that is exactly what Haavelmo [1944:48] does: “To make this a rational problem of statistical inference we have to start out by an axiom, postulating that every set of observable variables has associated with it one particular ‘true’, but unknown, probability law.”

But using this “trick of our own” and simply assigning “a certain probability law to a system of observable variables” cannot — any more than pure hope can — build a firm bridge between model and reality. Treating phenomena as if they essentially were stochastic processes is not the same as showing that they essentially are stochastic processes.

Rigour and elegance in the analysis do not make up for the gap between reality and model. It is the distribution of the phenomena in itself, and not its estimation, that ought to be at the centre of the stage. A crucial ingredient in any economic theory that wants to use probabilistic models should be a convincing argument for the view that “there can be no harm in considering economic variables as stochastic variables” [Haavelmo, ‘Statistical testing of business-cycle theories’, The Review of Economics and Statistics 25, 1943, p. 3]. In most cases, no such arguments are given.

The rather one-sided emphasis on usefulness and its concomitant instrumentalist justification cannot hide the fact that neither Haavelmo nor the legions of probabilistic econometricians following in his footsteps give any supportive evidence for considering it “fruitful to believe” in the possibility of treating unique economic data as the observable results of random drawings from an imaginary sampling of an imaginary population.

The kinds of laws and relations that econom(etr)ics has established are laws and relations about entities in models that presuppose causal mechanisms to be atomistic and additive. When causal mechanisms operate in real-world social target systems, they do so only in ever-changing and unstable combinations where the whole is more than a mechanical sum of its parts. If economic regularities obtain, they do so (as a rule) only because we engineered them for that purpose. Outside man-made ‘nomological machines’ they are rare, or even non-existent. Unfortunately, that also makes most of the achievements of econometrics — like most contemporary endeavours of economic theoretical modelling — rather useless.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history and methodology of economics.
