Lars Pålsson Syll | Statistics & Econometrics
Non-manipulability and the limits of potential outcome models
The Potential Outcome framework starts by defining the potential outcomes with reference to a manipulation. In doing so it makes a distinction between attributes or pre-treatment variables which are fixed for the units in the population, and causes, which are potentially manipulable. This is related to the connection between causal statements and randomized experiments. The causes are conceptually tied to, at the very least hypothetical, experiments. This may appear to be a disadvantage as it leads to difficulties in the PO framework when making causal statements about such attributes as race or gender …
I find the position that the manipulation is irrelevant unsatisfactory, and find the insistence in the PO approach on a theoretical or practical manipulation helpful. I am not sure what is meant by do(obesity = x) if the effect of changing obesity depends on the mechanism (say, exercise, diet, or surgery), and the mechanism is not specified in the operator. It is also not obvious to me why we would care about the value of do(obesity = x) if the effect is not tied to an intervention we can envision … What is relevant for policy makers is the causal effect of such policies, not the effect of a virtual intervention that makes currently obese people suddenly like non-obese people. In my original 1995 PhD course with Don Rubin on causal inference we had a similar discussion about the “causal effect of child poverty.” From our perspective that question was ill-posed, and a better-posed question would be about the effect of a particular intervention that would make currently poor families financially better off.
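Imbens's worry about do(obesity = x) can be made concrete with a toy simulation. Everything below is invented for illustration — the outcome model, the numbers, and the two mechanisms are assumptions, not empirical claims. The sketch shows that two interventions which both set obesity (here, BMI) to the same value x can produce different average outcomes when the mechanism matters, so "the effect of do(BMI = x)" is underspecified until the mechanism is named.

```python
import random

random.seed(0)

def outcome(bmi, mechanism):
    # Hypothetical model: systolic blood pressure depends on BMI,
    # but also on HOW that BMI was reached. Exercise is assumed to
    # have a direct protective effect that surgery does not.
    base = 90 + 1.5 * bmi
    if mechanism == "exercise":
        base -= 10  # assumed direct benefit of exercise, for illustration only
    return base + random.gauss(0, 5)

target_bmi = 25
n = 10_000

# Two interventions that both set BMI to the same target value ...
via_exercise = sum(outcome(target_bmi, "exercise") for _ in range(n)) / n
via_surgery = sum(outcome(target_bmi, "surgery") for _ in range(n)) / n

# ... yet they yield different average outcomes, so a mechanism-free
# do(BMI = x) does not pin down a single causal effect.
print(f"mean outcome via exercise: {via_exercise:.1f}")
print(f"mean outcome via surgery:  {via_surgery:.1f}")
```

On this (made-up) model the two "interventions" differ by about ten units on average, even though both deliver exactly the same value of the manipulated variable — which is Imbens's point about tying causal effects to interventions we can actually envision.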
Framing all causal questions as questions of manipulation or intervention runs into many problems, especially when we open the door to 'hypothetical' and 'symbolic' interventions. Humans face few barriers when imagining things, but that also makes it difficult to assess the relevance of the proposed thought experiments. Performing 'well-defined' interventions is one thing, but if we do not want to give up the search for answers to the questions we are actually interestedted in — settling instead for merely answerable questions — interventionist studies are of limited applicability and value. Intervention effects in thought experiments are not self-evidently the causal effects we are looking for. Identifying causes (reverse causality) and measuring the effects of causes (forward causality) are not the same thing. In social sciences like economics, we standardly first try to identify the problem and why it occurred, and only afterwards look at the effects of the causes.
Leaning on the interventionist approach often means that, instead of posing interesting questions at the social level, the focus is on individuals. Instead of asking about the structural socio-economic factors behind, e.g., gender or racial discrimination, the focus is on the choices individuals make (which — as yours truly maintains in his book Ekonomisk teori och metod — also tends to make the explanations presented insufficiently 'deep'). The Imbens quote above gives a good indication of the restricted kind of causal questions you are allowed to ask within the manipulation/intervention approach to causality. A typical example of the dangers of this limiting approach is 'Nobel prize' winner Esther Duflo, who thinks that economics should be based on evidence from randomised experiments and field studies. Duflo et consortes want to give up on 'big ideas' like political economy and institutional reform and instead solve more manageable problems the way plumbers do. Yours truly is far from sure that is the right way to move economics forward and make it a relevant and realist science. A plumber can fix minor leaks in your system, but if the whole system is rotten, something more than good old-fashioned plumbing is needed. The big social and economic problems we face today are not going to be solved by plumbers performing interventions or manipulations in the form of randomised controlled trials.
It is of course possible (more or less, depending on context) to retrofit causal questions into a manipulation/intervention framework, but before we get there, we have to agree that we have identified the causal problem we are trying to deal with when recommending different policies. Before we can calculate causal effects we have to identify the causes, and this is perhaps not always best done within a manipulation/intervention framework. One problem is that the manipulation/intervention approach, in broader social and economic contexts, requires a reframing of the questions we pose, which often means that we get 'well-defined' causal answers, but not necessarily answers to the questions we really want answered. The manipulation/intervention framework is one way to do causal analysis. But it is not the way to do it. Being an advocate of 'inference to the best explanation', I think we also have to consider explanatory considerations more carefully when estimating and identifying causal relations.