Sunday, November 24, 2024

Poor economics



from Lars Syll

Few volumes in contemporary economics have been more lauded, and have summarised a zeitgeist, as much as Abhijit Banerjee and Esther Duflo’s Poor Economics …

The implicit premise of the book is that interventions that work in one place can be expected to work in another. This presumes not only that the results of such “micro” interventions are substantially independent of the “macro” context, but also that a focus on such interventions, as opposed to those which reshape that context, is sufficient to address poverty. These premises of “separability” and “sufficiency”, although non-trivial, go largely undiscussed by the authors. The causal relations at work in relation to individuals or households cannot be understood in atomic isolation …

Not surprisingly, one consequence of the approach to development economics championed by the authors is that the questions asked by the discipline have become much smaller. The authors’ position appears to be that this is quite all right, since the small questions are in fact large in importance. It is not easy to accept this, however. The larger questions once asked within the discipline … have been pushed to the background in favour of such questions as whether bed-nets dipped in insecticide should be distributed free of charge or not, or whether two schoolteachers in the classroom are much better than one …

One may argue, in fact, that the style of metropolitan development economics celebrated in this book leads not so much to increasing rigour as to rigor mortis, by severely limiting the questions that can be asked and shoring up a practical philosophy that is quiescent in relation to many important questions that cannot readily be analysed using the authors’ favoured method. These include questions related to the structure and dynamics of markets, governmental institutions, macroeconomic policies, the workings of social classes, castes, and networks, and so forth. Although such questions can only be approached through other methods, they are not the less important for that.

Sanjay Reddy

Most ‘randomistas’ — like Duflo and Banerjee — argue that since random or as-if random assignment in natural experiments obviates the need for controlling potential confounders, this kind of “simple and transparent” design-based research method is preferable to more traditional multivariate regression analysis where the controlling only comes in ex post via statistical modelling.

The point of making a randomized experiment is often said to be that it ‘ensures’ that any correlation between a supposed cause and effect indicates a causal relation. This is believed to hold since randomization (allegedly) ensures that a supposed causal variable does not correlate with other variables that may influence the effect.
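In the simple setting the randomistas have in mind, this balancing property can be illustrated with a toy simulation (all numbers hypothetical: a “wealth” variable acts as a confounder driving self-selection into treatment):

```python
import random

random.seed(0)

n = 100_000
# Hypothetical confounder, e.g. household wealth, uniform on [0, 1].
wealth = [random.random() for _ in range(n)]

# Self-selection: wealthier households are more likely to take up treatment.
self_selected = [random.random() < w for w in wealth]
# Randomized assignment: treatment is independent of wealth by construction.
randomized = [random.random() < 0.5 for _ in range(n)]

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    m = len(x)
    mx, my = sum(x) / m, sum(y) / m
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / m
    vx = sum((a - mx) ** 2 for a in x) / m
    vy = sum((b - my) ** 2 for b in y) / m
    return cov / (vx * vy) ** 0.5

# Under self-selection the confounder correlates strongly with treatment;
# under random assignment the correlation is close to zero on average.
print("corr(wealth, treatment), self-selected:", round(corr(wealth, self_selected), 3))
print("corr(wealth, treatment), randomized:  ", round(corr(wealth, randomized), 3))
```

This is exactly the property the quoted argument grants, and then questions: balance in expectation within one experiment says nothing, by itself, about sample selection, heterogeneity, or extrapolation.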

The problem with this simplistic view of randomization is that the claims made are exaggerated and sometimes even false:

• Even if you manage to make the assignment to treatment and control groups ideally random, the sample selection certainly is not random, except in extremely rare cases. Even with a proper randomized assignment, if we apply the results to a biased sample there is always the risk that the experimental findings will not apply. What works ‘there’ does not work ‘here.’ Randomization hence does not ‘guarantee’ or ‘ensure’ that we make the right causal claim. Although randomization may help us rule out certain possible causal claims, randomization per se does not guarantee anything!

• Even if both sampling and assignment are made in an ideal random way, performing standard randomized experiments only gives you averages. The problem here is that although we may get an estimate of the ‘true’ average causal effect, this may ‘mask’ important heterogeneous effects of a causal nature. Even if the average causal effect is correctly estimated as 0, those who are ‘treated’ may have causal effects equal to -100 and those ‘not treated’ may have causal effects equal to 100. Contemplating being treated or not, most people would probably be interested in knowing about this underlying heterogeneity and would not consider the average effect particularly enlightening.

• There is almost always a trade-off between bias and precision. In real-world settings, a little bias is often outweighed by a gain in precision. And, most importantly, if we have a population with sizeable heterogeneity, the average treatment effect of the sample may differ substantially from the average treatment effect in the population. If so, the value of any extrapolating inferences made from trial samples to other populations is highly questionable.

• Since most real-world experiments and trials are built on a single randomization, reasoning about what would happen if you kept on randomizing forever does not help you ‘ensure’ or ‘guarantee’ that you avoid false causal conclusions in the one particular randomized experiment you actually perform. It is indeed difficult to see why thinking about what you know you will never do would make you confident about what you actually do.

• And then there is also the problem that ‘Nature’ may not always supply us with the random experiments we are most interested in. If we are interested in X, why should we study Y only because design dictates that? Method should never be prioritized over substance!
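The points about averages masking heterogeneity and about biased samples can be sketched with a small simulation (all numbers hypothetical, mirroring the -100/+100 example above):

```python
import random

random.seed(1)

# Hypothetical population of two equal-sized strata with opposite
# treatment effects: +100 in stratum A, -100 in stratum B.
N = 100_000
population = [("A", 100)] * (N // 2) + [("B", -100)] * (N // 2)

def run_trial(sample):
    """Ideal 50/50 random assignment within the sample; returns estimated ATE."""
    treated_sum = treated_n = control_sum = control_n = 0
    for stratum, effect in sample:
        if random.random() < 0.5:          # treatment arm
            treated_sum += effect          # outcome = baseline 0 + effect
            treated_n += 1
        else:                              # control arm
            control_n += 1                 # outcome = baseline 0
    return treated_sum / treated_n - control_sum / control_n

# 1) Representative sample: the estimated average treatment effect comes
#    out near 0, masking the fact that every individual effect is +/-100.
representative = random.sample(population, 10_000)
print("ATE, representative sample:", round(run_trial(representative), 1))

# 2) Biased sample (the trial recruits mostly from stratum A, as when a
#    pilot site differs from the target population): the estimate lands
#    far from the population average effect of 0.
stratum_a = [u for u in population if u[0] == "A"]
stratum_b = [u for u in population if u[0] == "B"]
biased = random.sample(stratum_a, 9_000) + random.sample(stratum_b, 1_000)
print("ATE, biased sample:", round(run_trial(biased), 1))
```

Both randomizations are flawless; the misleading conclusions come entirely from what the average conceals and from whom the sample represents.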

Randomization is not a panacea. It is not the best method for all questions and circumstances. Proponents of randomization make claims about its ability to deliver causal knowledge that are simply wrong. There are good reasons to be sceptical of the now popular — and ill-informed — view that randomization is the only valid and the best method on the market. It is not.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history and methodology of economics.
