
Why policy design without theory is useless



from Lars Syll

Taking into account the methodologies that underpin policy practices favouring inductive reasoning and randomized control trials (RCTs) for impact evaluation, there is a controversy around the use of these approaches to build experimental programmes or policy interventions …

As the decision-making policy process in the real world relies on institutional factors that may differ from one context to another, the methodology based on RCTs does not provide a credible basis for policy making. In short, the outcomes of inductive investigation can never be completely transported across time and space …

In fact, the methodology of RCTs runs the risk of mistaking spurious relationships for relevant causalities in the attempt to develop policy recommendations. In short, the use of the outcomes of RCTs as normative orientations for policy making should be called into question.

“What works” in the “sterile” environment of a laboratory does not necessarily work in a real world where social interactions and the dynamics of institutions are overwhelmed by power relations. Ethical considerations should therefore be taken into account in any attempt to build policy proposals.

Indeed, the transformation of the economic policy approach has clearly been a remarkable one. It is worth recalling Lars Syll’s words about the current sad state of economics as a science:

“A science that doesn’t self-reflect and ask important methodological and science-theoretical questions about its own activity is a science in dire straits. The main reason why mainstream economics has become increasingly useless as a public policy instrument is to be found in its perverted view on the value of methodology.”

Maria Alejandra Madi / WEA Pedagogy Blog

Evidence-based theories and policies are highly valued nowadays. Randomization is supposed to control for bias from unknown confounders. The received opinion, therefore, is that evidence based on randomized experiments is the best.
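To make the received view concrete, here is a minimal simulation sketch (my own illustrative numbers and variable names, not anything from the post): an unobserved confounder drives both treatment uptake and outcomes in an observational comparison, while a coin-flip assignment breaks that link.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000                      # large sample so sampling noise is small

u = rng.normal(size=n)           # unobserved confounder
true_effect = 1.0                # the causal effect we would like to recover

# Observational regime: the confounder drives who gets treated.
d_obs = (u + rng.normal(size=n) > 0).astype(float)
y_obs = true_effect * d_obs + 2.0 * u + rng.normal(size=n)

# Randomized regime: treatment is assigned by coin flip, independently of u.
d_rct = rng.integers(0, 2, size=n).astype(float)
y_rct = true_effect * d_rct + 2.0 * u + rng.normal(size=n)

naive_obs = y_obs[d_obs == 1].mean() - y_obs[d_obs == 0].mean()
naive_rct = y_rct[d_rct == 1].mean() - y_rct[d_rct == 0].mean()

print(f"true effect:            {true_effect:.2f}")
print(f"observational contrast: {naive_obs:.2f}  (biased by the unobserved confounder)")
print(f"randomized contrast:    {naive_rct:.2f}  (close to the true effect)")

Run with a large sample, the observational contrast overstates the true effect by a wide margin, while the randomized contrast lands close to it, which is precisely the promise the rest of this post goes on to qualify.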

More and more economists have lately also come to advocate randomization as the principal method for ensuring valid causal inferences.

I would, however, rather argue that randomization, just like econometrics, promises more than it can deliver, basically because it requires assumptions that are not possible to maintain in practice.

Especially when it comes to questions of causality, randomization is nowadays considered some kind of ‘gold standard.’ Everything has to be evidence-based, and the evidence has to come from randomized experiments.

But just like econometrics, randomization is basically a deductive method. Given the assumptions (such as manipulability, transitivity, separability, additivity, linearity, etc.), these methods deliver deductive inferences. The problem, of course, is that we will never completely know when the assumptions are right.

And although randomization may contribute to controlling for confounding, it does not guarantee it, since genuine randomness presupposes infinite experimentation and we know all real experimentation is finite. And even if randomization may help to establish average causal effects, it says nothing of individual effects unless homogeneity is added to the list of assumptions.

Real target systems are seldom epistemically isomorphic to our axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by randomization procedures may be valid in ‘closed’ models, but what we are usually interested in is causal evidence in the real target system we happen to live in.
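The point about average versus individual effects can be illustrated with a small sketch (purely assumed numbers): individual treatment effects are drawn with a positive mean but a wide spread, so the experiment recovers a positive average effect even though a large share of individuals would be harmed by the treatment.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical individual-level effects: mean +0.5, but widely spread.
tau_i = rng.normal(loc=0.5, scale=2.0, size=n)

y0 = rng.normal(size=n)          # potential outcome without treatment
y1 = y0 + tau_i                  # potential outcome with treatment

d = rng.integers(0, 2, size=n)   # randomized assignment
y = np.where(d == 1, y1, y0)     # only one potential outcome is ever observed

ate_hat = y[d == 1].mean() - y[d == 0].mean()

print(f"estimated average effect:              {ate_hat:+.2f}")
print(f"true average effect:                   {tau_i.mean():+.2f}")
print(f"share with negative individual effect: {(tau_i < 0).mean():.0%}")
# The experiment recovers the average, but it says nothing about the roughly
# 40% of individuals whose own effect is negative unless homogeneity is assumed.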

When does a conclusion established in population X hold for target population Y? Only under very restrictive conditions!

Ideally controlled experiments (still the benchmark even for natural and quasi-experiments) tell us with certainty what causes what effects – but only given the right ‘closures.’ Making appropriate extrapolations from (ideal, accidental, natural or quasi) experiments to different settings, populations or target systems is not easy. ‘It works there’ is no evidence for ‘it will work here.’ Causes deduced in an experimental setting still have to show that they come with an export warrant to the target population/system. The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods is despairingly small.
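A hedged sketch of the missing export warrant (again with made-up numbers): if the treatment effect is modified by a background trait, and that trait is common at the experimental site X but rare in the target population Y, the estimated effect does not travel; in this toy setup it even changes sign.

import numpy as np

rng = np.random.default_rng(2)

def average_effect(p_high: float, n: int = 100_000) -> float:
    """ATE in a population where a share p_high carries an effect-modifying trait.

    Assumed structural model: effect is +1.0 for 'high' units, -0.5 for 'low' units.
    """
    high = rng.random(n) < p_high
    tau = np.where(high, 1.0, -0.5)
    d = rng.integers(0, 2, size=n)
    y = 0.3 * high + tau * d + rng.normal(size=n)
    return y[d == 1].mean() - y[d == 0].mean()

ate_X = average_effect(p_high=0.8)   # experimental site: mostly 'high' units
ate_Y = average_effect(p_high=0.2)   # target population: mostly 'low' units

print(f"ATE estimated in X: {ate_X:+.2f}   ('it works there')")
print(f"ATE in target Y:    {ate_Y:+.2f}   (it does not follow that it works 'here')")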

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
