Endogeneity bias — fiction in a fictitious world (wonkish)
The bivariate model base and its a priori closure destines ‘endogeneity bias’ to a fictitious existence. That existence, in turn, confines applied research in a fictitious world. The concept loses its grip in empirical studies whose findings rely heavily on forecasting accuracy, e.g. a wide range of macro-modelling research as mentioned before. It remains thriving in areas where empirical results are evaluated virtually solely by the statistical significances of estimates of one or two predestined structural parameters in models offering highly partial causal explanations of the data at hand. These models are usually presented to serve the practical purpose of policy evaluation. Since conclusive empirical evidence is hard to come by for policies implemented in uncontrolled environments, making a good story becomes the essential goal … Use of consistent estimators actually enhances the persuasive power of the story by helping maintain the unfalsifiable status of the models …
From a discipline perspective, although belief in endogeneity bias has worked in favour of research topics where empirical findings are relatively hard to falsify, knowledge gain from data there is often dismally low, especially in studies working with large data samples … Econometric practice that disregards data knowledge in model design and camouflages deficiencies in model design by estimators which effectively modify key causal variables in non-causal ways against what was originally intended in theory, can only be called ‘alchemy’, not ‘science’.
A great article that really underscores that econometrics is basically a deductive method. Given the assumptions (such as manipulability, transitivity, Reichenbach probability principles, separability, additivity, linearity, etc.), it delivers deductive inferences. The problem, of course, is that we can never fully know whether the assumptions are right. Real target systems are seldom epistemically isomorphic to axiomatic-deductive models/systems, and even if they were, we would still have to argue for the external validity of the conclusions reached from within these epistemically convenient models/systems. Causal evidence generated by statistical/econometric procedures may be valid in ‘closed’ models, but what we are usually interested in is causal evidence in the real target system we happen to live in.
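To see the distinction in the simplest possible terms, here is a minimal simulation sketch (my own illustration with made-up numbers, not anything taken from the quoted article): inside a data-generating process that we ourselves construct to satisfy the textbook assumptions, OLS on an endogenous regressor is biased while a simple IV estimator recovers the ‘true’ parameter. The catch is that the validity of the instrument is something we stipulated when we built this closed little world; the data alone could never certify it for a real target system.

```python
# Hypothetical closed model: a constructed world in which the IV assumptions
# hold by fiat. All numbers (0.8, 0.9, beta = 1.0) are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 1.0                     # sample size and the 'true' structural parameter

z = rng.normal(size=n)                     # instrument: relevant and, by construction, exogenous
v = rng.normal(size=n)                     # common shock that creates the endogeneity
x = 0.8 * z + v + rng.normal(size=n)       # regressor, correlated with the error below
u = 0.9 * v + rng.normal(size=n)           # structural error, correlated with x
y = beta * x + u

# OLS slope: cov(x, y) / var(x); inconsistent here because cov(x, u) != 0
beta_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# IV slope (single instrument, Wald/2SLS form): cov(z, y) / cov(z, x)
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"true beta = {beta:.2f}, OLS = {beta_ols:.2f}, IV = {beta_iv:.2f}")
# Typical run: OLS lands near 1.34, IV near 1.00. The IV 'success' is entirely
# an artefact of having built z to be exogenous inside this closed model.
```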
Advocates of econometrics want deductively automated answers to fundamental causal questions. But to apply ‘thin’ methods we need ‘thick’ background knowledge of what is going on in the real world, not just in idealized models. Conclusions can only be as certain as their premises – and that also applies to the quest for causality in econometrics.
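To make that last point concrete, here is the same constructed sketch with one ‘thin’ premise quietly broken (again my own illustration, not the post’s): if the instrument also affects the outcome directly, so that the exclusion restriction fails, the formally ‘consistent’ IV recipe delivers a confidently wrong answer, and nothing in the estimates themselves signals the broken premise.

```python
# Hypothetical world in which the exclusion restriction fails: z now enters
# the structural error directly, yet we run the very same IV calculation.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 100_000, 1.0

z = rng.normal(size=n)
v = rng.normal(size=n)
x = 0.8 * z + v + rng.normal(size=n)
u = 0.9 * v + 0.5 * z + rng.normal(size=n)   # the premise of instrument exogeneity is false here
y = beta * x + u

beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(f"true beta = {beta:.2f}, IV estimate = {beta_iv:.2f}")
# Roughly 1.0 + 0.5/0.8, i.e. about 1.6: the same machinery, a false premise,
# and a conclusion exactly as uncertain as that premise.
```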