Tuesday, November 19, 2019

Unpacking the ‘Nobel prize’ in economics





In a 2017 speech, Duflo famously likened economists to plumbers. In her view, the role of an economist is to solve real-world problems in specific situations. This is a dangerous assertion, as it suggests that the “plumbing” the randomistas are doing is purely technical, not guided by theory or values. In fact, the randomistas’ approach to economics is neither objective, value-neutral, nor pragmatic; it is rooted in a particular theoretical framework and world view: neoclassical microeconomic theory and methodological individualism.

This grounding shapes how the experiments are designed and what underlying assumptions are made about individual and collective behavior. Perhaps the most obvious example is that the laureates often argue that specific aspects of poverty can be solved by correcting cognitive biases. Unsurprisingly, there is much overlap between the work of randomistas and mainstream behavioral economists, including a focus on nudges that may facilitate better choices on the part of people living in poverty.

Another example is Duflo’s analysis of women’s empowerment. Naila Kabeer argues that it employs an understanding of human behavior “uncritically informed by neoclassical microeconomic theory.” Since all behavior can allegedly be explained as a manifestation of individual maximizing behavior, alternative explanations are dispensed with. Because of this, Duflo fails to appreciate a series of other important factors in women’s empowerment, such as the role of sustained struggle by women’s organizations for rights, or the need to address the unfair distribution of unpaid work that limits women’s ability to participate in the community.

Ingrid Harvold Kvangraven

Nowadays many mainstream economists maintain that ‘imaginative empirical methods’ — such as natural experiments, field experiments, lab experiments, and RCTs — can help us answer questions concerning the external validity of economic models. In their view, such methods are more or less tests of ‘an underlying economic model’ and enable economists to make the right selection from the ever-expanding ‘collection of potentially applicable models.’

When looked at carefully, however, there are in fact few real reasons to share this optimism about the alleged ‘empirical turn’ in economics.

If we see experiments or field studies as theory tests or models that ultimately aspire to say something about the real ‘target system,’ then the problem of external validity is central.

Assume that you have examined how the performance of a group of people (A) is affected by a specific ‘treatment’ (B). How can we extrapolate/generalize to new samples outside the original population? How do we know that any replication attempt ‘succeeds’? How do we know when replicated experimental results can be said to justify inferences made about samples from the original population? If, for example, P(A|B) is the conditional density function for the original sample, and we are interested in making an extrapolative prediction of E[P′(A|B)], how can we know that the new sample’s density function is identical with the original? Unless we can give a really good argument for this being the case, inferences built on P(A|B) say nothing about the target system’s P′(A|B).
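To make the extrapolation problem concrete, here is a minimal simulation sketch. Everything in it is my own illustrative assumption (the functional form, the covariate ranges, the noise level are hypothetical, not from the post): when the treatment effect varies with a background covariate whose distribution differs between the trial population P and the target population P′, the average effect estimated under P can shrink or even reverse sign under P′.

```python
import random

random.seed(42)

def outcome(x, treated):
    # Hypothetical data-generating process: the treatment effect is
    # heterogeneous -- it declines (and eventually reverses) as the
    # background covariate x grows. Purely illustrative numbers.
    effect = 2.0 - 1.5 * x
    noise = random.gauss(0.0, 0.1)
    return 1.0 + (effect if treated else 0.0) + noise

def average_treatment_effect(xs):
    # Difference in mean outcomes between a treated and a control arm
    # drawn from the same covariate distribution.
    treated = [outcome(x, True) for x in xs]
    control = [outcome(x, False) for x in xs]
    return sum(treated) / len(treated) - sum(control) / len(control)

# Trial population P: covariate concentrated at low values.
xs_P = [random.uniform(0.0, 0.4) for _ in range(10_000)]
# Target population P': same 'treatment', different covariate distribution.
xs_P_prime = [random.uniform(1.5, 1.9) for _ in range(10_000)]

ate_P = average_treatment_effect(xs_P)
ate_P_prime = average_treatment_effect(xs_P_prime)
print(f"ATE estimated in trial population P : {ate_P:+.2f}")
print(f"ATE in target population P'         : {ate_P_prime:+.2f}")
```

Here the experiment run on P delivers a clearly positive average effect, while the same intervention applied to P′ is, on average, harmful — exactly the situation in which exporting an inference from P(A|B) to P′(A|B) without an argument for the similarity of the two distributions goes wrong.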

External validity/extrapolation/generalization is founded on the assumption that we can make inferences based on P(A|B) that are exportable to other populations for which P′(A|B) applies. Sure, if one can convincingly show that P and P′ are similar enough, the problems are perhaps surmountable. But arbitrarily introducing functional specification restrictions of the type invariance/stability/homogeneity is far from satisfactory. And often it is — unfortunately — exactly this that I see when I study mainstream economists’ RCTs and ‘experiments.’

Many ‘experimentalists’ claim that it is easy to replicate experiments under different conditions, and therefore a fortiori easy to test the robustness of experimental results. But is it really that easy? Population selection is almost never simple. Had the problem of external validity only been about inference from sample to population, it would pose no critical problem. But the really interesting inferences are those we try to make from specific labs/experiments/fields to the specific real-world situations/institutions/structures that we are interested in understanding or (causally) explaining. And there the population problem is more difficult to tackle.

Duflo sees development as the implementation and replication of expert-led fixes to provide basic goods for the poor who are often blinded by their exacting situation. It is a technical quest for certainty and optimal measures in a fairly static framework.

In Duflo’s science-based ‘benevolent paternalism’, the experimental technique works as an ‘anti-politics machine’ … social goals being predefined and RCT outcomes ideally settling ambiguities and conflicts. Real-world politics — disregarding or instrumentalising RCTs — and institutions — resulting from social compromises instead of evidence — are thus often perceived as external disturbances and constraints to economic science and evidence-based policy.

Agnès Labrousse

Lars Pålsson Syll
Professor at Malmö University. Primary research interests: the philosophy, history and methodology of economics.
