Lars Pålsson Syll considers the following as important: Economics
What do RCTs reveal about causality?
The insight critique contested the proposition that RCTs had revealed significant new facts or provided new understanding of development processes … But closer inspection reveals that they most often merely provide a validation of common sense. Whereas at times randomization seemed to reveal something surprising … in other instances it simply told us what had been long expected … One such finding—that providing preventative public health treatments at low or no cost, or better yet with incentives, leads to an increase in the number of people willing to accept them—is cited by the prize committee as having led to a change in the received wisdom in favor of user fees in primary health. This gets the history quite wrong, since such fees had long before that lost favor …
RCTs cannot reveal very much about causal processes since at their core they are designed to determine whether something has an effect, not how. The randomistas have attempted to deal with this charge by designing studies to interpret whether variations in the treatment have different effects, but this requires a prior conception of what the causal mechanisms are. The lack of understanding of causation can limit the value of any insights derived from RCTs in understanding economic life or in designing further policies and interventions. Ultimately, the randomistas tested what they thought was worth testing, and this revealed their own preoccupations and suppositions, contrary to the notion that they spent countless hours listening to and in close contact with the poor …
If RCTs now “entirely dominate” development economics, or worse, provide the basis for development policymaking, that is no cause for celebration. The roaring success of the randomistas tells us most of all about the historical moment in which they came to prominence: one in which defeatism or cynicism about public initiatives on a larger scale has been replaced by a focus on what works at the level of individuals and communities. But even there, what does work, really, remains an open question. The difficult question of how to fix broken institutions and help societies function better requires going beyond a biomedical metaphor of taking the right pill. Nobel or not, the debate must continue.
The problem many ‘randomistas’ — like this year’s ‘Nobel prize’ winners in economics, Duflo, Banerjee and Kremer — end up with when underestimating heterogeneity and interaction is not only an external validity problem when trying to ‘export’ regression results to different times or different target populations. It is also often an internal problem for the millions of regression estimates that economists produce every year.
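The point about heterogeneity and interaction can be illustrated with a small simulation (a hypothetical sketch with made-up numbers, not anyone’s actual study): suppose the treatment effect exists only for individuals with a binary background characteristic x. An idealized RCT then correctly recovers the average effect in its own trial population, but that average depends entirely on the mix of x, so it travels badly to a target population with a different mix:

```python
# Hypothetical illustration of heterogeneity and interaction: the individual
# treatment effect is 2.0 when x == 1 and 0.0 when x == 0, so the average
# effect an RCT estimates depends on how common x == 1 is in the trial
# population — and need not carry over to a differently composed target.
import random

random.seed(1)

def trial(p_high_x, n=100_000):
    """Run an idealized RCT on a population where a fraction p_high_x has
    x == 1. Returns the estimated average treatment effect."""
    treated, control = [], []
    for _ in range(n):
        x = 1 if random.random() < p_high_x else 0
        noise = random.gauss(0, 1)
        if random.random() < 0.5:            # randomized assignment
            treated.append(2.0 * x + noise)  # effect only when x == 1
        else:
            control.append(noise)
    return sum(treated) / len(treated) - sum(control) / len(control)

ate_site_a = trial(p_high_x=0.8)  # trial population: 80% have x == 1
ate_site_b = trial(p_high_x=0.2)  # target population: 20% have x == 1
print(round(ate_site_a, 2))  # close to 0.8 * 2.0 = 1.6
print(round(ate_site_b, 2))  # close to 0.2 * 2.0 = 0.4
```

Both trials are internally valid — randomization does its job in each — yet the two average treatment effects differ by a factor of four, simply because the populations differ in the background factor the effect interacts with. Nothing in the first trial alone tells you which case you are in.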
‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.
RCTs have very little reach beyond giving descriptions of what has happened in the past. From the perspective of the future, and for policy purposes, they are as a rule of limited value, since they cannot tell us what background factors were held constant when the trial intervention was made.
RCTs usually do not provide evidence that their results are exportable to other target systems, and they cannot be taken for granted to give generalizable results. That something works somewhere, for someone, is no warrant for believing that it will work for us here, or that it works generally.