Hunting for causes (wonkish)
There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control. This difference is especially accentuated in Bayesian analysis. Though the priors that Bayesians commonly assign to statistical parameters are untested quantities, the sensitivity to these priors tends to diminish with increasing sample size. In contrast, sensitivity to priors of causal parameters … remains non-zero regardless of (non-experimental) sample size.
Second, statistical assumptions can be expressed in the familiar language of probability calculus, and thus assume an aura of scholarship and scientific respectability. Causal assumptions, as we have seen before, are deprived of that honor, and thus become immediate suspect of informal, anecdotal or metaphysical thinking. Again, this difference becomes illuminated among Bayesians, who are accustomed to accepting untested, judgmental assumptions, and should therefore invite causal assumptions with open arms—they don’t. A Bayesian is prepared to accept an expert’s judgment, however esoteric and untestable, so long as the judgment is wrapped in the safety blanket of a probability expression. Bayesians turn extremely suspicious when that same judgment is cast in plain English, as in “mud does not cause rain” …
The third resistance to causal (vis-a-vis statistical) assumptions stems from their intimidating clarity. Assumptions about abstract properties of density functions or about conditional independencies among variables are, cognitively speaking, rather opaque, hence they tend to be forgiven, rather than debated. In contrast, assumptions about how variables cause one another are shockingly transparent, and tend therefore to invite counter-arguments and counter-hypotheses.
Pearl’s seminal contributions to this research field are well-known and indisputable. But on the ‘taming’ and ‘resolve’ of the issues, yours truly has to admit that — under the influence of especially David Freedman and Nancy Cartwright — he still has some doubts about the reach, especially in terms of realism and relevance, of Pearl’s ‘do-calculus solutions’ for the social sciences in general and economics in particular (see here, here, here and here). The distinction between the causal — ‘interventionist’ — E[Y|do(X)] and the more traditional statistical — ‘conditional expectationist’ — E[Y|X] is crucial, but Pearl and his associates, although they have fully explained why the first is so important, still have to convince us that it can (in a relevant way) be exported from ‘engineering’ contexts, where it arguably applies easily and universally, to socio-economic contexts where ‘surgery’, ‘hypothetical minimal interventions’, ‘manipulability’, and ‘modularity’ are perhaps not so universally at hand.
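To fix ideas, here is a minimal, hypothetical simulation (not Pearl’s own example; the variable names and coefficients are invented for illustration) of why the two expressions can come apart. A single unobserved confounder drives both X and Y, so E[Y|X] mixes selection with causation, while E[Y|do(X)], mimicked here by overriding X and leaving the rest of the model intact, recovers the causal effect. The ‘surgery’ step is precisely what is trivial in the simulation and contested in socio-economic applications.

```python
# Illustrative sketch only: a single unobserved confounder U drives both X and Y,
# so conditioning on X mixes in U's influence, while an intervention on X
# ("surgery") cuts the U -> X arrow and isolates the causal effect.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed structural model (all coefficients are made up for illustration):
#   U ~ N(0, 1)
#   X = 1{U + noise > 0}        (U raises the chance of "treatment")
#   Y = 2*X + 3*U + noise       (the true causal effect of X on Y is 2)
U = rng.normal(size=n)
X = (U + rng.normal(size=n) > 0).astype(float)
Y = 2 * X + 3 * U + rng.normal(size=n)

# 'Conditional expectationist' contrast: E[Y | X=1] - E[Y | X=0]
observational = Y[X == 1].mean() - Y[X == 0].mean()

# 'Interventionist' contrast: simulate do(X=1) and do(X=0) by overriding X
# while leaving the distribution of U untouched
Y_do1 = 2 * 1 + 3 * U + rng.normal(size=n)
Y_do0 = 2 * 0 + 3 * U + rng.normal(size=n)
interventional = Y_do1.mean() - Y_do0.mean()

print(f"E[Y|X=1] - E[Y|X=0]         ≈ {observational:.2f}")   # well above 2: confounding bias
print(f"E[Y|do(X=1)] - E[Y|do(X=0)] ≈ {interventional:.2f}")  # ≈ 2, the true causal effect
```

The simulation works only because we can rewrite the X-equation at will while everything else stays put, which is exactly the ‘modularity’ assumption whose availability in socio-economic settings is at issue.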
What capacity a treatment has to contribute to an effect for an individual depends on the underlying structures – physiological, material, psychological, cultural and economic – that make some causal pathways possible for that individual and some not, some likely and some unlikely. This is a well recognised problem when it comes to making inferences from model organisms to people. But it is equally a problem in making inferences from one person to another or from one population to another. Yet in these latter cases it is too often downplayed.

When the problem is explicitly noted, it is often addressed by treating the underlying structures as moderators in the potential outcomes equation: give a name to a structure-type – men/women, old/young, poor/well off, from a particular ethnic background, member of a particular religious or cultural group, urban/rural, etc. Then introduce a yes-no moderator variable for it. Formally this can be done, and sometimes it works well enough. But giving a name to a structure type does nothing towards telling us what the details of the structure are that matter nor how to identify them. In particular, the usual methods for hunting moderator variables, like subgroup analysis, are of little help in uncovering what the aspects of a structure are that afford the causal pathways of interest.

Getting a grip on what structures support similar causal pathways is central to using results from one place as evidence about another, and a casual treatment of them is likely to lead to mistaken inferences. The methodology for how to go about this is underdeveloped, or at best underarticulated, in EBM, possibly because it cannot be well done with familiar statistical methods and the ways we use to do it are not manualizable. It may be that medicine has fewer worries here than do social science and social policy, due to the relative stability of biological structures and disease processes. But this is no excuse for undefended presumptions about structural similarity.
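For concreteness, here is a small, hypothetical sketch of the ‘name a structure-type, add a yes/no moderator’ move described above (the ‘group’ flag and the data-generating process are invented for illustration). The interaction term mechanically recovers different effects for the two named subgroups, but nothing in the exercise tells us which features of the underlying structures actually afford the causal pathway.

```python
# Illustrative sketch of subgroup analysis via a named yes/no moderator.
# The formalism runs fine whether or not 'group' captures the structural
# detail that actually matters for the causal pathway.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

group = rng.integers(0, 2, size=n)   # yes/no moderator, e.g. urban/rural (invented)
treat = rng.integers(0, 2, size=n)   # randomised treatment indicator

# Assumed data-generating process: the effect is 1 in group 0 and 3 in group 1
y = (1 + 2 * group) * treat + rng.normal(size=n)

# Saturated regression: y ~ 1 + treat + group + treat:group
X = np.column_stack([np.ones(n), treat, group, treat * group])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"estimated effect in group 0: {beta[1]:.2f}")            # ≈ 1
print(f"estimated effect in group 1: {beta[1] + beta[3]:.2f}")   # ≈ 3
```

The estimates come out cleanly because the simulation was built around the named label; in real applications the label is a stand-in for structures we have not described, which is exactly Cartwright’s point.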