
Tag Archives: Statistics & Econometrics

Improving econometric analysis

Always, but always, plot your data. Remember that data quality is at least as important as data quantity. Always ask yourself, “Do these results make economic/common sense?” Check whether your “statistically significant” results are also “numerically/economically significant”. Be sure that you know exactly what assumptions are used/needed to obtain the results relating to the properties of any estimator or test that you use. Just because someone else has used a particular...

Read More »
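The distinction between statistical and economic significance is easy to demonstrate. Below is a minimal sketch (my own, not from the post; all numbers are illustrative) in which a huge sample makes a negligible effect look highly “significant”:

```python
# With enough observations, an economically trivial effect becomes
# "statistically significant". Illustrative simulation, not real data.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.normal(size=n)
y = 0.01 * x + rng.normal(size=n)   # true effect: tiny relative to the noise

beta = np.cov(x, y)[0, 1] / np.var(x)       # OLS slope (no intercept needed here)
resid = y - beta * x
se = np.sqrt(resid.var() / (n * x.var()))   # conventional standard error
t = beta / se
r2 = beta**2 * x.var() / y.var()

# t is roughly 10 -- wildly "significant" -- yet x explains ~0.01% of y's variance
print(f"beta = {beta:.4f}, t = {t:.1f}, R^2 = {r2:.6f}")
```

Plotting y against x here would show what the t-statistic hides: a cloud with no visible relationship.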

Guido Imbens on the response to LATE 

[embedded content] Many economists — yours truly included — are highly sceptical of the ability of mainstream economics to deliver useful models. Some of us even question the ‘modern’ insistence on modelling — “if it’s not in a model, it’s not economics.” Even if we accept the limitation of only being able to say something about (some kind of) average treatment effects when using instrumental-variables designs, a major problem is that researchers who use these...

Read More »

Manipulability — Pearl vs Rubin (wonkish)

Pearl asserts, while some RCM (Rubin Causal Models) theorists deny, that so-called “non-manipulable” variables can be causes (Pearl 2019; Holland 1986, 2008). Race and gender, which arguably cannot be experimentally manipulated, are key examples of such variables … My response is that although advocates of the frameworks adopt conflicting positions regarding certain variables, these positions are not forced upon...

Read More »

Ignorability — a questionable assumption

Researchers adhering to missing data analysis invariably invoke an ad-hoc assumption called “conditional ignorability,” often decorated as “ignorable treatment assignment mechanism”, which is far from being “well understood” by those who make it, let alone those who need to judge its plausibility. For readers versed in graphical modeling, “conditional ignorability” is none other than the back-door criterion that...

Read More »
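The equivalence Pearl points to can be made concrete with a simulation. The sketch below (mine, with made-up coefficients) has a single confounder Z that opens a back-door path between treatment X and outcome Y; conditioning on Z — which is all “conditional ignorability” amounts to here — blocks that path and recovers the causal effect:

```python
# Back-door adjustment: Z -> X and Z -> Y confound the X -> Y effect.
# Conditioning on Z blocks the back-door path. Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
z = rng.normal(size=n)                         # confounder
x = 0.8 * z + rng.normal(size=n)
y = 1.0 * x + 1.5 * z + rng.normal(size=n)     # true causal effect of X is 1.0

# Back door open: naive slope of y on x is biased upward
naive = np.cov(x, y)[0, 1] / np.var(x)

# Back door blocked: regress y on [1, x, z], read off the x coefficient
X = np.column_stack([np.ones(n), x, z])
adjusted = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(f"naive = {naive:.2f}, adjusted = {adjusted:.2f}")   # ~1.73 vs ~1.00
```

The simulation works because Z is observed and is the *whole* back-door; judging whether that holds in a real study is precisely the plausibility question the excerpt says is so poorly understood.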

The dangers of too much control

You see it all the time in studies. “We controlled for…” And then the list starts … The more things you can control for, the stronger your study is — or, at least, the stronger your study seems. Controls give the feeling of specificity, of precision. But sometimes, you can control for too much. Sometimes you end up controlling for the thing you’re trying to measure … An example is research around the gender wage gap, which...

Read More »
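A toy simulation (my own construction, with hypothetical numbers) makes the wage-gap example concrete: if occupation is itself affected by gender, then “controlling for occupation” absorbs part of the very gap being measured.

```python
# Over-controlling: occupation is a mediator on the path gender -> wage,
# so conditioning on it hides the indirect part of the gap. Toy numbers.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
female = rng.integers(0, 2, size=n).astype(float)
occupation = -0.5 * female + rng.normal(size=n)          # gender -> occupation
wage = 2.0 * occupation - 1.0 * female + rng.normal(size=n)
# total gap = direct (-1.0) + indirect (2.0 * -0.5) = -2.0

X_total = np.column_stack([np.ones(n), female])
total = np.linalg.lstsq(X_total, wage, rcond=None)[0][1]

X_ctrl = np.column_stack([np.ones(n), female, occupation])
controlled = np.linalg.lstsq(X_ctrl, wage, rcond=None)[0][1]

print(f"total gap = {total:.2f}, 'controlled' gap = {controlled:.2f}")  # ~-2.0 vs ~-1.0
```

Both regressions are arithmetically correct; they simply answer different questions, and only the first answers “how large is the gap?”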

Causal traps of statistics

[embedded content] Statistical reasoning certainly seems paradoxical to most people. Take for example Simpson’s paradox. From a theoretical perspective, it is important because it shows that causality can never be reduced to a question of statistics or probabilities unless you are — miraculously — able to keep constant all other factors that influence the probability of the outcome studied. To understand causality we always have to relate it...

Read More »
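Simpson’s paradox takes only a few numbers to exhibit. The snippet below uses standard textbook-style counts (not taken from the video): the treatment does better in *each* severity group, yet worse in the aggregate, because severe cases are over-represented among the treated.

```python
# Simpson's paradox with illustrative counts: (recovered, total) per stratum.
data = {
    "treated":   {"mild": (81, 87),   "severe": (192, 263)},
    "untreated": {"mild": (234, 270), "severe": (55, 80)},
}

for group, strata in data.items():
    for stratum, (rec, tot) in strata.items():
        print(f"{group:9s} {stratum:6s}: {rec / tot:.0%}")
    rec = sum(r for r, _ in strata.values())
    tot = sum(t for _, t in strata.values())
    print(f"{group:9s} overall: {rec / tot:.0%}")
# treated wins in both strata (93% vs 87%, 73% vs 69%)
# yet loses overall (78% vs 83%): the aggregation, not the data, misleads
```

Whether the stratified or the pooled comparison is the right one cannot be read off the probabilities; it depends on the causal story behind the table, which is exactly the excerpt’s point.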

The fundamental econometric dilemma

There is one point, to which in practice I attach a great importance, you do not allude to. In many of these statistical researches, in order to get enough observations they have to be scattered over a lengthy period of time; and for a lengthy period of time it very seldom remains true that the environment is sufficiently stable. That is the dilemma of many of these enquiries, which they do not seem to me to face....

Read More »
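Keynes’s dilemma is easy to simulate (my own sketch, not his): stretch the sample long enough to get precision and you may straddle a regime change, so the full-sample estimate describes neither regime.

```python
# Parameter instability: the slope shifts mid-sample, and the full-sample
# OLS estimate is an average that is true of neither regime. Toy numbers.
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
beta = np.where(np.arange(n) < n // 2, 2.0, -1.0)   # environment shifts mid-sample
y = beta * x + rng.normal(size=n)

def ols_slope(x, y):
    return np.cov(x, y)[0, 1] / np.var(x)

print(f"first half : {ols_slope(x[:n // 2], y[:n // 2]):.2f}")   # ~ 2.0
print(f"second half: {ols_slope(x[n // 2:], y[n // 2:]):.2f}")   # ~ -1.0
print(f"full sample: {ols_slope(x, y):.2f}")                     # ~ 0.5
```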

The LATE approach — a critique

One of the reasons Guido Imbens and Joshua Angrist won the 2021 ‘Nobel prize’ in economics is their LATE approach used especially in instrumental variables estimation of causal effects. Another prominent ‘Nobel prize’ winner in economics — Angus Deaton — is not overly impressed: Without explicit prior consideration of the effect of the instrument choice on the parameter being estimated, such a procedure is effectively the...

Read More »
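Deaton’s point — that the instrument choice determines the parameter being estimated — can be shown in a stylized simulation (my construction, not his). With heterogeneous effects, each instrument identifies the average effect for *its own* compliers, so two valid instruments give two different “causal effects” of the same treatment on the same population:

```python
# LATE depends on the instrument: each instrument moves a different
# complier group, each with a different treatment effect. Toy setup.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
effect = np.where(rng.random(n) < 0.5, 0.5, 3.0)   # two latent effect groups
low_group = effect == 0.5

def wald(z, d, y):
    # IV (Wald) estimate for a single binary instrument
    return (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())

for name, compliers in [("z1 (moves low-effect group) ", low_group),
                        ("z2 (moves high-effect group)", ~low_group)]:
    z = rng.integers(0, 2, size=n)
    d = np.where(compliers, z, rng.integers(0, 2, size=n))  # only compliers respond to z
    y = effect * d + rng.normal(size=n)
    print(f"{name}: LATE = {wald(z, d, y):.2f}")
# prints ~0.50 and ~3.00: both are 'the' IV estimate, for different estimands
```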

Natural ‘natural experiments’

Evidently, however, the potential for the strictly natural natural experimental approach, which relies exclusively on natural events as instruments, is constrained by the small number of random events provided by nature and by the fact that most outcomes of interest are the result of many factors associated with preferences, technologies, and markets. And the prospect of the discovery of new and useful natural events is limited … It is clear that the number of natural...

Read More »

Instrumental Variables — The Good and the Bad

[embedded content] Making appropriate extrapolations from (ideal, natural or quasi) experiments to different settings, populations or target systems is not easy. “It works there” is no evidence for “it will work here.” The causal background assumptions made have to be justified, and without licenses to export, the value of ‘rigorous’ and ‘precise’ methods used when analyzing ‘natural experiments’ is often...

Read More »