What makes economics a science?
Well, if we are to believe most mainstream economists, models are what make economics a science.
In a recent Journal of Economic Literature (1/2017) review of Dani Rodrik’s Economics Rules, renowned game theorist Ariel Rubinstein discusses Rodrik’s justifications for the view that “models make economics a science.” Although Rubinstein has some doubts about those justifications — models are not indispensable for telling good stories or clarifying things in general; logical consistency does not determine whether economic models are right or wrong; and being able to expand our set of ‘plausible explanations’ doesn’t make economics more of a science than good fiction does — he still largely subscribes to the scientific image of economics as a result of using formal models that help us achieve ‘clarity and consistency’.
There’s much in the review I like — Rubinstein shows a commendable scepticism on the prevailing excessive mathematisation of economics, and he is much more in favour of a pluralist teaching of economics than most other mainstream economists — but on the core question, “the model is the message,” I beg to differ with the view put forward by both Rodrik and Rubinstein.
Economics is, more than any other social science, model-oriented. There are many reasons for this: the history of the discipline, ideals taken over from the natural sciences (especially physics), the search for universality (explaining as much as possible with as little as possible), rigour, precision, etc.
Mainstream economists want to explain social phenomena, structures and patterns, based on the assumption that the agents are acting in an optimizing (rational) way to satisfy given, stable and well-defined goals.
The procedure is analytical. The whole is broken down into its constituent parts so as to be able to explain (reduce) the aggregate (macro) as the result of interaction of its parts (micro).
Modern mainstream (neoclassical) economists ground their models on a set of core assumptions (CA) — basically describing the agents as ‘rational’ actors — and a set of auxiliary assumptions (AA). Together CA and AA make up what might be called the ‘ur-model’ (M) of all mainstream neoclassical economic models. Based on these two sets of assumptions, they try to explain and predict both individual (micro) and — most importantly — social phenomena (macro).
The core assumptions typically consist of:
CA1 Completeness — rational actors are able to compare different alternatives and decide which one(s) he prefers
CA2 Transitivity — if the actor prefers A to B, and B to C, he must also prefer A to C.
CA3 Non-satiation — more is preferred to less.
CA4 Maximizing expected utility — in choice situations under risk (calculable uncertainty) the actor maximizes expected utility.
CA5 Consistent efficiency equilibria — the actions of different individuals are consistent, and the interaction between them results in an equilibrium.
When describing the actors as rational in these models, the concept of rationality used is instrumental rationality: consistently choosing the preferred alternative, which is judged to have the best consequences for the actor given his wishes/interests/goals, which are exogenously given in the model. How these preferences/wishes/interests/goals are formed is typically not considered to be within the realm of rationality, and a fortiori not part of economics proper.
The picture given by this set of core assumptions (rational choice) is a rational agent with strong cognitive capacity that knows what alternatives he is facing, evaluates them carefully, calculates the consequences and chooses the one — given his preferences — that he believes has the best consequences according to him.
Weighing the different alternatives against each other, the actor makes a consistent optimizing (typically described as maximizing some kind of utility function) choice, and acts accordingly.
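To make the rational-choice core concrete, here is a minimal sketch of what CA1–CA4 amount to operationally: a hypothetical agent facing lotteries, with an invented concave utility function, who can compare any pair of alternatives (completeness) and picks the one with the highest expected utility. The alternatives, probabilities and the utility function below are my own illustrative assumptions, not anything taken from the models under discussion.

```python
# A minimal sketch of the rational-choice core assumptions (hypothetical numbers).
from itertools import combinations

# Each alternative is a lottery: a list of (probability, payoff) pairs.
alternatives = {
    "safe":   [(1.0, 50)],
    "risky":  [(0.5, 120), (0.5, 0)],
    "medium": [(0.8, 70), (0.2, 10)],
}

def utility(payoff):
    """Hypothetical concave utility; more is preferred to less (CA3)."""
    return payoff ** 0.5

def expected_utility(lottery):
    """CA4: evaluate a risky alternative by its expected utility."""
    return sum(p * utility(x) for p, x in lottery)

# Completeness (CA1): every pair of alternatives is comparable,
# because each is reduced to a single expected-utility number.
for a, b in combinations(alternatives, 2):
    assert (expected_utility(alternatives[a]) >= expected_utility(alternatives[b])
            or expected_utility(alternatives[b]) >= expected_utility(alternatives[a]))

# The optimizing agent chooses the expected-utility-maximizing alternative.
choice = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
print(choice)  # → medium
```

Notice that everything doing real work here — the comparability of all alternatives, the utility representation, the known probabilities — is put in by assumption, which is precisely what the CA list codifies.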
Beside the core assumptions (CA) the model also typically has a set of auxiliary assumptions (AA) spatio-temporally specifying the kind of social interaction between ‘rational actors’ that take place in the model. These assumptions can be seen as giving answers to questions such as
AA1 who are the actors and where and when do they act
AA2 which specific goals do they have
AA3 what are their interests
AA4 what kind of expectations do they have
AA5 what are their feasible actions
AA6 what kind of agreements (contracts) can they enter into
AA7 how much and what kind of information do they possess
AA8 how do the actions of the different individuals/agents interact with each other.
So, the ur-model of all economic models basically consists of a general specification of what (axiomatically) constitutes optimizing rational agents and a more specific description of the kind of situations in which these rational actors act (making AA serve as a kind of specification/restriction of the intended domain of application for CA and its deductively derived theorems). The list of assumptions can never be complete, since there will always be unspecified background assumptions and some (often silent) omissions (like closure, transaction costs, etc., regularly based on negligibility and applicability considerations). The hope, however, is that the ‘thin’ list of assumptions will be sufficient to explain and predict ‘thick’ phenomena in the real, complex world.
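As a toy illustration of this CA/AA division of labour (a sketch of my own, not a model anyone has actually written down): the core assumption below is simply "the agent maximizes utility", while the auxiliary assumptions specify which actions are feasible. Changing AA alone changes the deduced "theorem" (the predicted choice) — exactly the sense in which AA restricts the domain of application. The utility function and action sets are invented for the illustration.

```python
# CA: optimizing behaviour -- the agent picks the utility-maximizing action.
# AA: the spatio-temporal specifics -- here, simply which actions are feasible.

def predicted_choice(utility, feasible_actions):
    """Deduce the 'theorem' (the predicted choice) from CA plus a given AA."""
    return max(feasible_actions, key=utility)

def u(x):
    """Hypothetical single-peaked utility with its peak at x = 7."""
    return -(x - 7) ** 2

aa_wide = range(0, 11)    # AA variant 1: actions 0..10 are feasible
aa_narrow = range(0, 5)   # AA variant 2: only actions 0..4 are feasible

print(predicted_choice(u, aa_wide))    # → 7 (the unconstrained optimum)
print(predicted_choice(u, aa_narrow))  # → 4 (boundary of the restricted domain)
```

Same core, different auxiliary specification, different prediction — which is why leaving AA unspecified leaves a model without a determinate empirical content.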
In some (textbook) model depictions we are essentially given the following structure,
A1, A2, … An
----------------------
Theorem,

where a set of undifferentiated assumptions is used to infer a theorem.
This is, however, too vague and imprecise to be of much help, and it does not give a true picture of the usual mainstream modelling strategy, where there is a differentiation between a set of law-like hypotheses (CA) and a set of auxiliary assumptions (AA), giving the more adequate structure
CA1, CA2, … CAn & AA1, AA2, … AAn
----------------------
Theorem,

or, more explicitly,

CA1, CA2, … CAn
----------------------
(AA1, AA2, … AAn) → Theorem,
more clearly underlining the function of AA as a set of (empirical, spatio-temporal) restrictions on the applicability of the deduced theorems.
In the extreme cases we get
CA1, CA2, … CAn
----------------------
Theorem,

where the deduced theorems are analytical entities with universal and totally unrestricted applicability, or
AA1, AA2, … AAn
----------------------
Theorem,
where the deduced theorem is transformed into an untestable tautological thought-experiment without any empirical commitment whatsoever beyond telling a coherent fictitious as-if story.
Not clearly differentiating between CA and AA means that we cannot make this all-important interpretative distinction, and it opens the door to unwarrantedly ‘saving’ or ‘immunizing’ models from almost any kind of critique by simple equivocation between interpreting models as empirically empty, purely deductive-axiomatic analytical systems and interpreting them as models with explicit empirical aspirations. Flexibility is usually deemed a virtue, but in this methodological context it is more a sign of trouble than of real strength. Models that are compatible with everything, or that come with unspecified domains of application, are worthless from a scientific point of view.
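The unfalsifiability worry can be caricatured in a few lines of code (a toy construction of my own, not a model anyone defends): if unspecified auxiliary assumptions may be adjusted freely, the ‘theory’ is consistent with every conceivable observation, and hence carries zero informational content.

```python
def consistent_with_theory(observation):
    """With a freely adjustable, unspecified AA set, some rationalization
    can always be found -- so no observation can ever refute the theory."""
    return True

observations = ["prices rose", "prices fell", "prices were unchanged"]

# The 'theory' explains everything -- which is to say, it explains nothing.
print(all(consistent_with_theory(o) for o in observations))  # → True
```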
Economics — in contradistinction to logic and mathematics — ought to be an empirical science, and empirical testing of ‘axioms’ ought to be self-evidently relevant for such a discipline. For although the mainstream economist himself (implicitly) claims that his axiom is universally accepted as true and in no need of proof, that is in no way a justified reason for the rest of us to simpliciter accept the claim.
When applying deductivist thinking to economics, mainstream (neoclassical) economists usually set up ‘as if’ models based on the logic of idealization and a set of tight axiomatic assumptions from which consistent and precise inferences are made. The beauty of this procedure is of course that if the axiomatic premises are true, the conclusions necessarily follow. But — although the procedure is a marvellous tool in mathematics and axiomatic-deductivist systems, it is a poor guide for real-world systems. As Hans Albert has it on the neoclassical style of thought:
Science progresses through the gradual elimination of errors from a large offering of rivalling ideas, the truth of which no one can know from the outset. The question of which of the many theoretical schemes will finally prove to be especially productive and will be maintained after empirical investigation cannot be decided a priori. Yet to be useful at all, it is necessary that they are initially formulated so as to be subject to the risk of being revealed as errors. Thus one cannot attempt to preserve them from failure at every price. A theory is scientifically relevant first of all because of its possible explanatory power, its performance, which is coupled with its informational content …
Clearly, it is possible to interpret the ‘presuppositions’ of a theoretical system … not as hypotheses, but simply as limitations to the area of application of the system in question. Since a relationship to reality is usually ensured by the language used in economic statements, in this case the impression is generated that a content-laden statement about reality is being made, although the system is fully immunized and thus without content. In my view that is often a source of self-deception in pure economic thought …
Most mainstream economic models are abstract and unrealistic, and the hypotheses they present are mostly non-testable. How, then, are they supposed to tell us anything about the world we live in?
Confronted with the massive empirical failures of their models and theories, mainstream economists often retreat into looking upon their models and theories as some kind of ‘conceptual exploration,’ and give up any hopes whatsoever of relating their theories and models to the real world. Instead of trying to bridge the gap between models and the world, one decides to look the other way.
To me this kind of scientific defeatism is equivalent to surrendering our search for understanding the world we live in. It can’t be enough to prove or deduce things in a model world. If theories and models do not directly or indirectly tell us anything about the world we live in, why should we waste any of our precious time on them?
The way axioms and theorems are formulated in mainstream (neoclassical) economics standardly leaves their specification almost entirely unrestricted, safely making every imaginable piece of evidence compatible with the all-embracing ‘theory’ — and a theory without informational content never risks being empirically tested and falsified. In mainstream economics’ ‘thought-experimental’ activities it may of course be very ‘handy’, but it is totally void of empirical value.
Mainstream economic models are nothing but ‘broken-pieces’ models. Models of that kind can’t make economics a science.