Angus Deaton on the limited value of RCTs
I think economists, especially development economists, are sort of like economists in the 50’s with regressions. They have a magic tool but they don’t yet have much of an idea of the problems with that magic tool. And there are a lot of them. I think it’s just like any other method of estimation, it has its advantages and disadvantages. I think RCTs rarely meet the hype. People turned to RCTs because they got tired of all the arguments over observational studies about exogeneity and instruments and sample selectivity and all the rest of it. But all of those problems come back in somewhat different forms in RCTs …
People tend to split the issues into internal and external validity. There are a lot of problems with that distinction but it is a way of thinking about some of the issues. For instance if you go back to the 70s and 80s and you read what was written then, people thought quite hard about how you take the result from one experiment and how it would apply somewhere else. I see much too little of that in the development literature today …
For instance, in a newspaper story about economists’ experiments that I read today, a reporter wrote that an RCT allows you to establish causality for sure. But that statement is absurd. There’s a standard error, for a start, and there are lots of cases where it is hard to get the standard errors right. And even if we have causality, we need an argument that causality will work in the same way somewhere else, let alone in general.
I think we are in something of a mess on this right now. There’s just a lot of stuff that’s not right. There is this sort of belief in magic, that RCTs are attributed with properties that they do not possess. For example, RCTs are supposed to automatically guarantee balance between treatment and controls. And there is an almost routine confusion that RCTs are somehow reliable, or that unbiasedness implies reliability …
We often find a randomized control trial with only a handful of observations in each arm and with enormous standard errors. But that’s preferred to a potentially biased study that uses 100 million observations. That just makes no sense. Each study has to be considered on its own. RCTs are fine, but they are just one of the techniques in the armory that one would use to try to discover things. Gold standard thinking is magical thinking.
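Deaton's point about tiny trials and enormous standard errors is easy to check numerically. The sketch below (a hypothetical illustration, not from Deaton's work) simulates an RCT with a true treatment effect of 0.5 and noisy outcomes, and computes the difference-in-means estimate together with its standard error for different arm sizes. The effect size, noise level, and sample sizes are all illustrative assumptions.

```python
import random
import statistics

def rct_standard_error(n_per_arm, effect=0.5, noise_sd=2.0, seed=0):
    """Simulate one two-arm RCT; return the estimated effect and its standard error."""
    rng = random.Random(seed)
    control = [rng.gauss(0.0, noise_sd) for _ in range(n_per_arm)]
    treated = [rng.gauss(effect, noise_sd) for _ in range(n_per_arm)]
    estimate = statistics.mean(treated) - statistics.mean(control)
    # Standard error of a difference in means: sqrt(s_t^2/n + s_c^2/n)
    se = (statistics.variance(treated) / n_per_arm
          + statistics.variance(control) / n_per_arm) ** 0.5
    return estimate, se

for n in (10, 1000, 100000):
    est, se = rct_standard_error(n)
    print(f"n per arm = {n:>6}: estimate = {est:+.3f}, standard error = {se:.3f}")
```

With only a handful of observations per arm the standard error swamps the effect being estimated, and it shrinks roughly as one over the square root of the sample size — which is why randomization alone, without adequate sample size, settles very little.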