
Hunting for causes (wonkish)



from Lars Syll

There are three fundamental differences between statistical and causal assumptions. First, statistical assumptions, even untested, are testable in principle, given sufficiently large sample and sufficiently fine measurements. Causal assumptions, in contrast, cannot be verified even in principle, unless one resorts to experimental control. This difference is especially accentuated in Bayesian analysis. Though the priors that Bayesians commonly assign to statistical parameters are untested quantities, the sensitivity to these priors tends to diminish with increasing sample size. In contrast, sensitivity to priors of causal parameters … remains non-zero regardless of (non-experimental) sample size.

Second, statistical assumptions can be expressed in the familiar language of probability calculus, and thus assume an aura of scholarship and scientific respectability. Causal assumptions, as we have seen before, are deprived of that honor, and thus become immediately suspect of informal, anecdotal or metaphysical thinking. Again, this difference becomes illuminated among Bayesians, who are accustomed to accepting untested, judgmental assumptions, and should therefore invite causal assumptions with open arms—they don’t. A Bayesian is prepared to accept an expert’s judgment, however esoteric and untestable, so long as the judgment is wrapped in the safety blanket of a probability expression. Bayesians turn extremely suspicious when that same judgment is cast in plain English, as in “mud does not cause rain” …

The third resistance to causal (vis-a-vis statistical) assumptions stems from their intimidating clarity. Assumptions about abstract properties of density functions or about conditional independencies among variables are, cognitively speaking, rather opaque, hence they tend to be forgiven, rather than debated. In contrast, assumptions about how variables cause one another are shockingly transparent, and tend therefore to invite counter-arguments and counter-hypotheses.

Judea Pearl

Pearl’s seminal contributions to this research field are well-known and indisputable. But on the ‘taming’ and ‘resolve’ of the issues, yours truly (under the influence especially of David Freedman and Nancy Cartwright) has to admit that he still has some doubts about the reach, especially in terms of realism and relevance, of Pearl’s ‘do-calculus solutions’ for the social sciences in general and economics in particular (see here, here, here and here). The distinction between the causal (‘interventionist’) E[Y|do(X)] and the more traditional statistical (‘conditional expectationist’) E[Y|X] is crucial. But although Pearl and his associates have fully explained why the first is so important, they still have to convince us that it can (in a relevant way) be exported from ‘engineering’ contexts, where it arguably applies easily and universally, to socio-economic contexts, where ‘surgery’, ‘hypothetical minimal interventions’, ‘manipulability’, ‘faithfulness’, ‘stability’, and ‘modularity’ are perhaps not so universally at hand.
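The difference between E[Y|X] and E[Y|do(X)] can be made concrete with a small simulation. The sketch below uses an entirely hypothetical toy model (an unobserved confounder Z driving both a binary treatment X and an outcome Y): conditioning on X = 1 picks out units that tend to have high Z, while intervening to set X = 1 leaves the distribution of Z untouched, so the two expectations come apart.

```python
import random

random.seed(0)

def draw(p):
    """Bernoulli draw with success probability p."""
    return 1 if random.random() < p else 0

def sample(do_x=None):
    """One draw from a toy model: confounder Z -> X and Z -> Y, plus X -> Y.
    Passing do_x forces X to that value (an intervention), cutting the Z -> X arrow."""
    z = draw(0.5)                              # unobserved confounder
    x = do_x if do_x is not None else draw(0.8 if z else 0.2)
    y = draw(0.1 + 0.3 * x + 0.5 * z)          # Y depends on both X and Z
    return x, y, z

n = 100_000

# Observational: E[Y | X = 1] -- conditioning selects units with high Z
obs = [sample() for _ in range(n)]
n_x1 = sum(1 for x, _, _ in obs if x == 1)
y_given_x1 = sum(y for x, y, _ in obs if x == 1) / n_x1

# Interventional: E[Y | do(X = 1)] -- forcing X leaves Z's distribution alone
intv = [sample(do_x=1) for _ in range(n)]
y_do_x1 = sum(y for _, y, _ in intv) / n

print(f"E[Y | X=1]     ~ {y_given_x1:.3f}")   # inflated by confounding
print(f"E[Y | do(X=1)] ~ {y_do_x1:.3f}")      # the causal quantity
```

In this toy model the interventional expectation is 0.1 + 0.3 + 0.5 × 0.5 = 0.65, while the conditional expectation is pushed up to about 0.8 because P(Z = 1 | X = 1) = 0.8. The point of contention in the text is not this arithmetic, which is uncontroversial, but whether the ‘surgery’ that the do-operator presupposes is available in socio-economic settings.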

What capacity a treatment has to contribute to an effect for an individual depends on the underlying structures – physiological, material, psychological, cultural and economic – that make some causal pathways possible for that individual and some not, some likely and some unlikely. This is a well recognised problem when it comes to making inferences from model organisms to people. But it is equally a problem in making inferences from one person to another or from one population to another. Yet in these latter cases it is too often downplayed.

When the problem is explicitly noted, it is often addressed by treating the underlying structures as moderators in the potential outcomes equation: give a name to a structure-type – men/women, old/young, poor/well off, from a particular ethnic background, member of a particular religious or cultural group, urban/rural, etc. Then introduce a yes-no moderator variable for it. Formally this can be done, and sometimes it works well enough. But giving a name to a structure type does nothing towards telling us what the details of the structure are that matter nor how to identify them. In particular, the usual methods for hunting moderator variables, like subgroup analysis, are of little help in uncovering what the aspects of a structure are that afford the causal pathways of interest.

Getting a grip on what structures support similar causal pathways is central to using results from one place as evidence about another, and a casual treatment of them is likely to lead to mistaken inferences. The methodology for how to go about this is underdeveloped, or at best underarticulated, in EBM, possibly because it cannot be well done with familiar statistical methods and the ways we use to do it are not manualizable. It may be that medicine has fewer worries here than do social science and social policy, due to the relative stability of biological structures and disease processes. But this is no excuse for undefended presumptions about structural similarity.

Nancy Cartwright
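Cartwright’s point about yes-no moderators can be illustrated with a minimal sketch (a hypothetical toy model, not her own notation): introducing a labelled moderator M lets the treatment effect differ formally between the two groups, yet the label itself tells us nothing about which features of the underlying structure produce the difference.

```python
# Toy potential-outcomes model with a named yes/no moderator M (hypothetical).
# The subgroup label lets the treatment effect of X differ by group, but it is
# silent on the structural details that generate that difference.
def outcome(x, m):
    """Potential outcome under treatment x (0/1) for a unit with moderator m (0/1)."""
    base = 2.0
    effect = 1.5 if m else 0.5   # treatment effect differs across labelled groups
    return base + effect * x

# Subgroup "analysis": average treatment effect within each labelled group
ate_m0 = outcome(1, 0) - outcome(0, 0)
ate_m1 = outcome(1, 1) - outcome(0, 1)
```

The formal move succeeds (the two subgroup effects differ), but nothing in the model explains why membership in M = 1 affords the stronger causal pathway, which is exactly the gap Cartwright identifies.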

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
