
Taking the con out of RCTs


Development actions and interventions (policies/programs/projects/practices) should be based on the evidence. This truism now comes with a radical proposal about the meaning of “the evidence.” In development practice, where there are hundreds of complex, sometimes rapidly changing, contexts, seemingly innocuous phrases like “rely on the rigorous evidence” are taken to mean: “Ignore evidence from your context and rely in your context on evidence that was ‘rigorous’ for another place, another time, another implementing organization, another set of interacting policies, another set of local social norms, another program design, and do this without any underlying model or theory that guides your understanding of the relevant phenomena.” …

The advocates of RCTs and the use and importance of rigorous evidence, who are mostly full-time academics based in universities, have often taken a condescending, if not outright ad hominem, stance towards development practitioners. They have often treated arguments against exclusive reliance on RCT evidence, such as the claims that the world is complex, that getting things done in the real world is a difficult craft, that RCTs don’t address key issues, and that results cannot be transplanted across contexts, not as legitimate arguments but as the self-interested pleadings of “bureaucrats” who don’t care about “the evidence” or development outcomes. It is therefore striking that it is the practitioner objections about external validity that are actually technically right about the unreliability of RCTs for making context-specific predictions, and it is the academics that are wrong, and this in the technical domain that is supposedly the academicians’ comparative advantage.

Lant Pritchett

Just like econometrics, the use of randomization often promises more than it can deliver, basically because it requires assumptions that in practice are not possible to maintain. And just like econometrics, randomization is basically a deductive method: given the assumptions, these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right. And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation, and all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we are usually interested in is causal evidence about the real target system we happen to live in.
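To make the point about average versus individual effects concrete, here is a minimal simulation sketch in Python. The numbers are purely illustrative assumptions of mine, not taken from Pritchett or any actual study: randomization does recover the average treatment effect, yet when effects are heterogeneous that average need not describe any single individual.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical individual treatment effects: half the population gains +2,
# the other half loses 1, so the true average effect is +0.5.
individual_effect = np.where(rng.random(n) < 0.5, 2.0, -1.0)
baseline = rng.normal(0.0, 1.0, n)

# Random assignment to treatment or control.
treated = rng.random(n) < 0.5
outcome = baseline + treated * individual_effect

# The difference in means recovers the average effect well ...
ate_hat = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated average effect: {ate_hat:.2f} (true value 0.50)")

# ... yet no individual actually experiences an effect of +0.5,
# and a large share of individuals is harmed by the treatment.
print(f"share of individuals harmed: {(individual_effect < 0).mean():.2f}")
```

Without the added assumption of homogeneous effects, the ‘rigorous’ average tells us nothing about what the treatment does to any particular person.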

As Pritchett shows, a conclusion established in population X holds for target population Y only under very restrictive conditions. ‘Ideally controlled experiments’ tell us with certainty what causes what effects — but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here.” Causes deduced in an experimental setting still have to be shown to come with an export-warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods — and ‘on-average-knowledge’ — is despairingly small.
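The export-warrant problem can be illustrated with another hedged little sketch (again my own made-up numbers, not Pritchett’s data): the very same treatment, whose effect depends on a contextual moderator, produces a large average effect in population X and a negligible one in population Y, simply because the moderator is common in one place and rare in the other.

```python
import numpy as np

rng = np.random.default_rng(1)

def average_effect(share_with_moderator: float, n: int = 200_000) -> float:
    """Average effect when the treatment raises outcomes by 1.5 only for
    units possessing some contextual moderator (say, a complementary
    institution) and does nothing for everyone else."""
    has_moderator = rng.random(n) < share_with_moderator
    return np.where(has_moderator, 1.5, 0.0).mean()

# 'Rigorous' evidence from X ...
print(f"population X (moderator present in 80%): {average_effect(0.8):.2f}")
# ... is a poor guide to Y, where the moderator is rare.
print(f"population Y (moderator present in 10%): {average_effect(0.1):.2f}")
```

Nothing in the internal validity of the X experiment licenses the jump to Y; that requires exactly the kind of background causal knowledge the RCT itself does not supply.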

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
