By Lars Pålsson Syll. Topics: Statistics & Econometrics
The trouble with econometrics
In the process of translating a theory into implications about data, so many auxiliary assumptions are made that all contact with reality is lost. Any conflict can be resolved by adjusting the auxiliary assumptions. For example, suppose we want to learn whether a production process satisfies diminishing, constant, or increasing returns to scale. The issue is of substantial significance from the point of view of theory. In carrying out the test, we assume a particular form of the production function, a particular way in which stochastic errors enter, and particular ways to aggregate and measure the factors of production. The result of the test has no credibility, because we do not know what we are rejecting or accepting: the theory, or the auxiliary assumptions, or the ingenuity of the econometrician.

Asad Zaman
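The returns-to-scale example can be made concrete with a toy simulation (all numbers here are hypothetical, and the code is only a sketch using NumPy). Data are generated from a Cobb-Douglas technology with exactly constant returns to scale, and the only auxiliary assumption that varies is whether capital is measured without error. Classical measurement error attenuates the estimated capital elasticity, so the same test flips its verdict from 'constant' to 'decreasing' returns:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Hypothetical Cobb-Douglas world with exactly constant returns:
# log Y = 0.4*log K + 0.6*log L + noise  (elasticities sum to 1.0)
logK = rng.normal(size=n)
logL = rng.normal(size=n)
logY = 0.4 * logK + 0.6 * logL + 0.1 * rng.normal(size=n)

def scale_elasticity(logK_obs):
    """OLS of logY on (1, logK_obs, logL); returns the estimated
    sum of elasticities, the usual returns-to-scale statistic."""
    X = np.column_stack([np.ones(n), logK_obs, logL])
    coef, *_ = np.linalg.lstsq(X, logY, rcond=None)
    return coef[1] + coef[2]

# Capital measured perfectly: verdict is 'constant returns' (~1.0).
print(scale_elasticity(logK))

# Same world, but capital observed with classical measurement error:
# attenuation bias pushes the sum below 1, suggesting 'decreasing returns'.
print(scale_elasticity(logK + 0.7 * rng.normal(size=n)))
```

The data-generating process never changes; only the auxiliary measurement assumption does, which is exactly why the test result by itself cannot adjudicate between theory and auxiliaries.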
Another prominent trouble with econometrics is the way the so-called error term is interpreted. Mostly it is taken to represent the effect of the variables omitted from the model: a 'cover-all' term standing in for omitted content and needed to 'save' the assumed deterministic relation between the other random variables included in the model. Error terms are usually assumed to be orthogonal to (uncorrelated with) the explanatory variables. But since error terms are unobservable, that assumption is impossible to test empirically. And without a justification of the orthogonality assumption, there is as a rule nothing to ensure identifiability:
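The untestability of the orthogonality assumption is easy to see in a sketch (hypothetical numbers, NumPy only). When an omitted variable is correlated with the regressor, the OLS slope is biased; yet the fitted residuals are orthogonal to the regressor by construction, so nothing in the estimated model itself reveals the violation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: z is an omitted variable
# correlated with the regressor x.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 1.0 * z + rng.normal(size=n)   # true coefficient on x is 1.0

# OLS of y on x alone: the 'error term' silently absorbs z,
# violating orthogonality and biasing the slope upward (to ~1.5 here).
beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)
residuals = y - beta * x

print(f"estimated slope: {beta:.2f}")
# The fitted residuals are uncorrelated with x *by construction*,
# so the violated assumption leaves no trace in the estimated model.
print(f"corr(residuals, x): {np.corrcoef(residuals, x)[0, 1]:.2e}")
```

The residual check comes out clean no matter how badly orthogonality fails, because OLS imposes it mechanically on the fitted residuals; the assumption concerns the unobservable true errors, which is precisely why it cannot be tested from the data.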
With enough math, an author can be confident that most readers will never figure out where a FWUTV (fact with unknown truth value) is buried. A discussant or referee cannot say that an identification assumption is not credible if they cannot figure out what it is and are too embarrassed to ask.
Distributional assumptions about error terms are a good place to bury things because hardly anyone pays attention to them. Moreover, if a critic does see that this is the identifying assumption, how can she win an argument about the true expected value of the level of aether? If the author can make up an imaginary variable, “because I say so” seems like a pretty convincing answer to any question about its properties.

Paul Romer