
Ignorability — a questionable assumption




Researchers adhering to missing data analysis invariably invoke an ad-hoc assumption called “conditional ignorability,” often decorated as “ignorable treatment assignment mechanism”, which is far from being “well understood” by those who make it, let alone those who need to judge its plausibility.

For readers versed in graphical modeling, “conditional ignorability” is none other than the back-door criterion that students learn in the second class on causal inference, and which “missing-data” advocates have vowed to avoid at all cost. As we know, this criterion can easily be interpreted and verified when background knowledge is presented in graphical form but, as you can imagine, it turns into a frightening enigma for those who shun the light of graphs. Still, the simplicity of reading this criterion off a graph makes it easy to test whether those who rely heavily on ignorability assumptions know what they are assuming. The results of this test are discomforting …
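[To make the correspondence concrete, here is a sketch in standard notation; the symbols T, Y and X are illustrative and not taken from the quoted passage. Conditional ignorability assumes that, given covariates X, treatment assignment is independent of the potential outcomes:

\[ \bigl(Y(0),\, Y(1)\bigr) \;\perp\!\!\!\perp\; T \mid X . \]

The back-door criterion instead asks that X block every back-door path from T to Y and contain no descendant of T; either condition licenses the same adjustment formula:

\[ P\bigl(y \mid \mathrm{do}(t)\bigr) \;=\; \sum_{x} P\bigl(y \mid t,\, x\bigr)\, P(x) . \]

The first condition is a judgment about unobservable potential outcomes, while the second can be read directly off a causal graph.]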

Unfortunately, the mantra: “missing data analysis in causal inference is well understood” continues to be chanted at an ever-increasing intensity, building faith among the faithful, and luring chanters to assume ignorability as self-evident. Worse yet, the mantra blinds researchers from seeing how an improved level of understanding can emerge by abandoning the missing-data prism altogether, and conducting causal analysis in its natural habitat, using scientific models of reality rather than unruly patterns of missingness in the data …

We come now to claim (2), concerning the possibility of causality-free interpretation of missing data problems. It is possible indeed to pose a missing data problem in purely statistical terms, totally void of “missingness mechanism” vocabulary, void even of conditional independence assumptions. But this is rarely done, because the answer is trivial: none of the parameters of interest would be estimable without such assumptions (i.e., the likelihood function is flat). In theory, one can argue that there is really nothing causal about “missingness mechanism” as conceptualized by Rubin (1976), since it is defined in terms of conditional independence relations, a purely statistical notion that requires no reference to causation.

Not quite! The conditional independence relations that define missingness mechanisms are fundamentally different from those invoked in standard statistical analysis. In standard statistics, independence assumptions are presumed to hold in the distribution that governs the observed data, whereas in missing-data problems, the needed independencies are assumed to hold in the distribution of variables which are only partially observed. In other words, the independence assumptions invoked in missing data analysis are necessarily judgmental, and only rarely do they have testable implications in the available data …
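[For concreteness, and again in standard notation rather than Pearl and Mohan’s own symbols: writing R for the missingness indicator and splitting the outcome into an observed part Y_obs and a missing part Y_mis, Rubin’s (1976) “missing at random” condition reads

\[ P\bigl(R \mid Y_{\mathrm{obs}},\, Y_{\mathrm{mis}}\bigr) \;=\; P\bigl(R \mid Y_{\mathrm{obs}}\bigr), \qquad \text{equivalently} \qquad R \;\perp\!\!\!\perp\; Y_{\mathrm{mis}} \mid Y_{\mathrm{obs}} . \]

This is a conditional independence statement about values that are, by definition, never observed, which is why assuming it is a judgment call and why it can in general not be tested against the available data.]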

We hope these arguments convince even the staunchest missing data enthusiast to switch mantras and treat missing data problems for what they are: causal inference problems.

Judea Pearl & Karthika Mohan

An interesting article on a highly questionable assumption used in ‘potential outcome’ causal models.

It also exemplifies how tractability has often come to override reality and truth in science.

Not least in modern mainstream economics.

A ‘tractable’ model is of course great since it usually means you can solve it. But using ‘simplifying’ tractability assumptions (rational expectations, common knowledge, representative agents, linearity, additivity, ergodicity, exchangeability, ignorability, etc.) because the models otherwise cannot be ‘manipulated’, or cannot deliver ‘rigorous’ and ‘precise’ predictions and explanations, does not exempt scientists from having to justify their modelling choices. Being able to ‘manipulate’ things in models cannot per se be enough to warrant a methodological choice. Suppose economists do not really think their tractability assumptions make for good and realist models. In that case, it is certainly a just question to ask for clarification of the ultimate goal of the whole modelling endeavour.

Take for example the ongoing discussion on rational expectations as a modelling assumption in economics. Those who want to build macroeconomics on microfoundations usually maintain that the only robust policies are those based on rational expectations and representative-actor models. As yours truly has tried to show in On the use and misuse of theories and models in mainstream economics, there is really no support for this conviction at all. If microfounded macroeconomics has nothing to say about the real world and the economic problems out there, why should we care about it? The final court of appeal for macroeconomic models is not whether we can ‘manipulate’ them once we have made our tractability assumptions, but the real world. And as long as no convincing justification is put forward for how the inferential bridging de facto is made, macroeconomic model-building is little more than hand-waving that gives us rather little warrant for making inductive inferences from models to real-world target systems. If substantive questions about the real world are being posed, it is the formalistic-mathematical representations utilized to analyze them that have to match reality, not the other way around.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history and methodology of economics.
