
Angus Deaton rethinking economics


Like many others, I have recently found myself changing my mind, a discomfiting process for someone who has been a practicing economist for more than half a century. I will come to some of the substantive topics, but I start with some general failings. I do not include the corruption allegations that have become common in some debates. Even so, economists, who have prospered mightily over the past half century, might fairly be accused of having a vested interest in capitalism as it currently operates. I should also say that I am writing about a (perhaps nebulous) mainstream, and that there are many nonmainstream economists.

  • Power: Our emphasis on the virtues of free, competitive markets and exogenous technical change can distract us from the importance of power in setting prices and wages, in choosing the direction of technical change, and in influencing politics to change the rules of the game. Without an analysis of power, it is hard to understand inequality or much else in modern capitalism.
  • Philosophy and ethics: In contrast to economists from Adam Smith and Karl Marx through John Maynard Keynes, Friedrich Hayek, and even Milton Friedman, we have largely stopped thinking about ethics and about what constitutes human well-being. We are technocrats who focus on efficiency. We get little training about the ends of economics, on the meaning of well-being—welfare economics has long since vanished from the curriculum—or on what philosophers say about equality. When pressed, we usually fall back on an income-based utilitarianism. We often equate well-being with money or consumption, missing much of what matters to people. In current economic thinking, individuals matter much more than relationships between people in families or in communities.
  • Efficiency is important, but we valorize it over other ends. Many subscribe to Lionel Robbins’ definition of economics as the allocation of scarce resources among competing ends or to the stronger version that says that economists should focus on efficiency and leave equity to others, to politicians or administrators. But the others regularly fail to materialize, so that when efficiency comes with upward redistribution—frequently though not inevitably—our recommendations become little more than a license for plunder. Keynes wrote that the problem of economics is to reconcile economic efficiency, social justice, and individual liberty. We are good at the first, and the libertarian streak in economics constantly pushes the last, but social justice can be an afterthought. After economists on the left bought into the Chicago School’s deference to markets—“we are all Friedmanites now”—social justice became subservient to markets, and a concern with distribution was overruled by attention to the average, often nonsensically described as the “national interest.”
  • Empirical methods: The credibility revolution in econometrics was an understandable reaction to the identification of causal mechanisms by assertion, often controversial and sometimes incredible. But the currently approved methods, randomized controlled trials, differences in differences, or regression discontinuity designs, have the effect of focusing attention on local effects, and away from potentially important but slow-acting mechanisms that operate with long and variable lags. Historians, who understand about contingency and about multiple and multidirectional causality, often do a better job than economists of identifying important mechanisms that are plausible, interesting, and worth thinking about, even if they do not meet the inferential standards of contemporary applied economics.
  • Humility: We are often too sure that we are right. Economics has powerful tools that can provide clear-cut answers, but that require assumptions that are not valid under all circumstances. It would be good to recognize that there are almost always competing accounts and learn how to choose between them …

Economists could benefit by greater engagement with the ideas of philosophers, historians, and sociologists, just as Adam Smith once did. The philosophers, historians, and sociologists would likely benefit too.

Angus Deaton

A great article by a great economist!

Yours truly basically agrees with Deaton’s criticism of the general shortcomings of mainstream economics, but let me still comment on the specific criticism of ‘empirical methods’.

In mainstream economics, there has been a growing interest in experiments and, not least, in how to design them so that they can answer questions about causality and policy effects. Economic research on discrimination, for example, nowadays often emphasizes the importance of a randomization design when trying to determine to what extent discrimination can be causally attributed to differences in preferences or information, using so-called correspondence tests and field experiments.

A common starting point is the ‘counterfactual approach’ (developed mainly by Jerzy Neyman and Donald Rubin), which is often presented and discussed through examples of randomized controlled trials, natural experiments, difference-in-differences, matching, regression discontinuity designs, etc.
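For readers who have not seen it, here is a minimal sketch of the standard potential-outcomes notation behind that approach (textbook notation, added only to fix ideas): each individual has two potential outcomes, only one of which is ever observed, and random assignment is what licenses reading a difference in group means as an average causal effect.

$$Y_i = T_i\,Y_i(1) + (1 - T_i)\,Y_i(0), \qquad \mathrm{ATE} = \mathrm{E}\,[\,Y_i(1) - Y_i(0)\,],$$

$$\text{and if } T_i \perp \big(Y_i(0),\,Y_i(1)\big) \text{ (random assignment):} \quad \mathrm{E}\,[\,Y_i \mid T_i = 1\,] - \mathrm{E}\,[\,Y_i \mid T_i = 0\,] = \mathrm{ATE}.$$

Much of what follows turns on how much weaker this identification becomes once assignment is only ‘as if’ random, once effects differ across individuals, or once the experimental sample is not drawn from the population we actually care about.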

Mainstream economists generally view this development of the economics toolbox positively. Yours truly — like Angus Deaton — is not entirely positive about the randomization approach.

A notable limitation of counterfactual randomization designs is that they only tell us how ‘treatment groups’ differ on average from ‘control groups.’ Let me give an example to illustrate how limiting this fact can be:

Among school debaters and politicians in Sweden, it is claimed that so-called ‘independent schools’ (charter schools) are better than municipal schools and lead to better results. To find out if this is really the case, a number of students are randomly selected to take a test. The result could be: Test result = 20 + 5T, where T = 1 if the student attends an independent school and T = 0 if the student attends a municipal school. This would seem to confirm the claim that independent-school students score, on average, 5 points higher than students in municipal schools.

Now, politicians are (hopefully) aware that this statistical result cannot be interpreted in causal terms, because independent-school students typically do not have the same background (socio-economic, educational, cultural, etc.) as those who attend municipal schools (the relationship between school type and result is confounded by selection bias). To obtain a better measure of the causal effect of school type, politicians suggest that admission to an independent school be decided by a lottery in which 1000 students participate — a classic example of a randomization design in natural experiments. The chance of winning is 10%, so 100 students are given this opportunity. Of these, 20 accept the offer to attend an independent school. Of the 900 lottery participants who do not ‘win,’ 100 choose to attend an independent school anyway.

The lottery is typically treated by school researchers as an ‘instrumental variable,’ and when the analysis is carried out, the result is: Test result = 20 + 2T. This is standardly interpreted as a causal measure of how much better students would, on average, perform on the test if they attended independent schools instead of municipal schools. But is it true? No! Unless the causal effect of school type is exactly the same for every student (a rather far-fetched ‘homogeneity assumption’), the estimated average causal effect only applies to the students who choose to attend an independent school if they ‘win’ the lottery, but who would not otherwise choose to attend one (in statistical jargon, we call these ‘compliers’). It is difficult to see why this group of students would be particularly interesting in this example, given that the average causal effect estimated using the instrumental variable says nothing at all about the effect on the majority (the 100 out of 120 who choose to attend an independent school without ‘winning’ the lottery) of those who choose to attend an independent school.
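To make the complier point concrete, here is a small simulation sketch in Python (the type shares and effect sizes are my own invented numbers, not estimates from any actual lottery study): three latent student types are given different true effects of attending an independent school, and the instrumental-variable (Wald) estimate recovers only the effect for the small complier group, not for the always-takers who make up most of the treated.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent student types (shares are purely illustrative assumptions):
#  'always'   - attend an independent school whether or not they win the lottery
#  'complier' - attend only if they win
#  'never'    - never attend
types = rng.choice(["always", "complier", "never"], size=n, p=[0.10, 0.10, 0.80])

z = rng.binomial(1, 0.10, size=n)                                            # lottery win (the instrument)
t = ((types == "always") | ((types == "complier") & (z == 1))).astype(int)   # actual attendance

# Heterogeneous true effects of attending (again, invented for illustration)
effect = np.where(types == "always", 8.0, np.where(types == "complier", 2.0, 0.0))

y = 20 + rng.normal(0, 5, size=n) + effect * t                               # observed test result

# Wald / IV estimate: reduced form divided by first stage
late = (y[z == 1].mean() - y[z == 0].mean()) / (t[z == 1].mean() - t[z == 0].mean())

print(f"IV (Wald) estimate:                         {late:5.2f}")                              # close to 2, the complier effect
print(f"True average effect for always-takers:      {effect[types == 'always'].mean():5.2f}")  # 8
print(f"Share of the treated who are always-takers: {(types[t == 1] == 'always').mean():.2f}") # about 0.9

With a 10% win probability and these type shares, the first stage is about 0.10 and the reduced form about 0.2, so the Wald ratio lands near 2 even though roughly nine out of ten students who actually attend an independent school are always-takers, about whom the estimate says nothing.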

Conclusion: Researchers must be much more careful in interpreting ‘average estimates’ as causal. Reality exhibits a high degree of heterogeneity, and ‘average parameters’ often tell us very little!

Randomization ideally means that we achieve orthogonality (independence) in our models. But it does not follow that when we randomize in real experiments we actually attain this ideal. The ‘balance’ that randomization should ideally produce cannot be taken for granted when the ideal is translated into reality. Here, one must argue for, and verify, that the ‘assignment mechanism’ really is stochastic and that ‘balance’ really has been achieved!
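As a rough illustration of why balance has to be checked rather than assumed, here is a minimal Python sketch with two invented background covariates: even though the assignment mechanism below is genuinely stochastic, a single draw in a small experiment can easily leave the groups noticeably unbalanced.

import numpy as np

rng = np.random.default_rng(1)
n = 60                                              # a small experiment, where imbalance bites hardest

# Two invented background covariates (e.g. parental education, prior test score)
parental_edu = rng.normal(12, 3, size=n)
prior_score = rng.normal(50, 10, size=n)

treat = rng.permutation(np.repeat([0, 1], n // 2))  # genuinely random 50/50 assignment

# Balance check: standardized mean differences between treatment and control
for name, x in [("parental_edu", parental_edu), ("prior_score", prior_score)]:
    diff = x[treat == 1].mean() - x[treat == 0].mean()
    print(f"{name:>13}: standardized mean difference = {diff / x.std(ddof=1):+.2f}")

# With n = 60, differences of 0.2-0.4 standard deviations are not unusual in a
# single randomization, even though assignment was truly random.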

Even if we accept the limitation of only being able to say something about average treatment effects, there is another theoretical problem. An ideal randomized experiment assumes that a number of individuals are first randomly selected from a population and then randomly assigned to a treatment group or a control group. Given that both selection and assignment are successfully carried out at random, it can be shown that the expected outcome difference between the two groups equals the average causal effect in the population. The snag is that the experiments actually conducted rarely involve participants randomly selected from the population of interest! In most cases, experiments are started because there is a problem of some kind in a given population (e.g. schoolchildren or job seekers in country X) that one wants to address. Since an ideal randomized experiment requires that both selection and assignment be randomized, virtually none of the empirical results that randomization advocates so eagerly tout hold up in a strict mathematical-statistical sense. It is hardly a coincidence that only assignment is talked about when it comes to ‘as if’ randomization in natural experiments. Moreover, when it comes to ‘as if’ randomization in natural experiments, the sad but inevitable fact is that there can always be a dependency between the variables being studied and unobservable factors in the error term, and this dependency can never be tested!
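A stylized sketch of the selection side of the problem (the numbers are invented purely for illustration): when treatment effects vary across the population and the experiment recruits only from the subgroup where the problem is acute, perfectly randomized assignment still recovers that subgroup’s average effect, not the population average to which the result is often generalized.

import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000                                    # the target population

# Invented heterogeneity: a 'problem group' (20%) with a large effect, everyone else with a small one
problem_group = rng.binomial(1, 0.2, size=N).astype(bool)
tau = np.where(problem_group, 6.0, 1.0)
print(f"Population average treatment effect: {tau.mean():.2f}")    # 0.2*6 + 0.8*1 = 2.0

# The experiment recruits only from the problem group (no random selection step),
# then randomizes assignment perfectly within that sample.
sample = rng.choice(np.flatnonzero(problem_group), size=5_000, replace=False)
treat = rng.binomial(1, 0.5, size=sample.size).astype(bool)
y = 20 + rng.normal(0, 5, size=sample.size) + tau[sample] * treat

print(f"Experimental estimate:               {y[treat].mean() - y[~treat].mean():.2f}")  # close to 6.0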

Another major problem is that researchers who use these randomization-based research strategies, in their pursuit of ‘exact’ and ‘precise’ results, often set up problem formulations that are not at all the ones we really want answers to. Design becomes the main thing, and as long as one can get more or less clever experiments in place, they believe they can draw far-reaching conclusions about both causality and the ability to generalize experimental outcomes to larger populations. Unfortunately, this often means that this type of research is biased away from interesting and important problems and towards prioritizing the choice of method. Design and research planning are important, but the credibility of research ultimately lies in being able to answer the relevant questions that both citizens and researchers want answered.

Believing that there is only one really good evidence-based method on the market, and that randomization is the only way to achieve scientific validity, blinds people to the search for, and use of, other methods that in many contexts are better. Insisting on using only one tool often means using the wrong tool.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
