
The significance of insignificance (wonkish)


Skewness, outliers, multiple hypothesis testing, and reliance on asymptotics often give rise to spurious findings, as is well known among most statisticians. But the problem may actually be even worse than we have thought.
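
To see how little it takes, consider a minimal Monte Carlo sketch (purely illustrative, with hypothetical settings, not drawn from any particular study): testing twenty true-null hypotheses at the conventional 5% level will, more often than not, turn up at least one "significant" result.

```python
# Purely illustrative Monte Carlo: how often pure noise produces "significant"
# results when many unrelated hypotheses are tested at the 5% level.
# (Hypothetical settings; not taken from any study discussed here.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n_obs, alpha = 20, 100, 0.05

false_positives = 0
for _ in range(n_tests):
    # Two groups drawn from the same distribution: the null is true by construction.
    a = rng.normal(size=n_obs)
    b = rng.normal(size=n_obs)
    _, p = stats.ttest_ind(a, b)
    false_positives += p < alpha

print(f"{false_positives} of {n_tests} tests came out 'significant' despite no true effect")
# Chance of at least one false positive across 20 independent 5%-level tests:
print(f"family-wise false-positive probability = {1 - (1 - alpha) ** n_tests:.2f}")  # roughly 0.64
```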

In an interesting new paper, Alwyn Young, having examined over 2,000 regressions reported in American Economic Association journals, summarizes his findings:

Armed with an idea and a data set, we search for statistically significant relations, examining the relationship between dependent and independent variables that are of interest to us. Having found a significant relation, we then work energetically to convince seminar participants, referees and editors that it is robust, adding more and more right-hand side variables and employing universal “corrections” to deal with unknown problems with the error disturbance. This paper suggests that this dialogue between our roles as authors and our roles as sceptical readers may be misdirected. Correlations between dependent and independent variables may reflect the role of omitted variables, but they may also be the result of completely random correlation. This is unlikely to be revealed by adding additional non-random right-hand side variables. Moreover, the high maximal leverage produced by these conditioning relations, combined with the use of leverage dependent asymptotic standard error corrections, produces a systematic bias in favour of finding significant results in finite samples. A much better indication of random correlation is the number of attempted insignificant specifications that accompanied the finding of a significant result. A large number of statistically independent insignificant results contain much more information than a sequence of correlated variations on a limited number of significant specifications. This fact is lost in our professional dialogue, with its focus on testing the robustness of significant relations …
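
As a purely illustrative sketch of the mechanism Young describes (the settings and variable names below are hypothetical, and this is not his actual procedure or data), one can regress pure noise on a set of unrelated candidate regressors, keep the one with the smallest p-value, and then "test robustness" by adding further unrelated controls together with a leverage-dependent standard-error correction:

```python
# Purely illustrative sketch (not Young's procedure or data): search a set of
# unrelated regressors for a "significant" one, then "test robustness" by adding
# further unrelated controls and a universal standard-error correction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
y = rng.normal(size=n)                    # the outcome is pure noise
candidates = rng.normal(size=(n, 25))     # 25 unrelated candidate regressors

# Step 1: the search. Keep the candidate with the smallest p-value.
pvals = [sm.OLS(y, sm.add_constant(candidates[:, j])).fit().pvalues[1]
         for j in range(candidates.shape[1])]
best = int(np.argmin(pvals))
print(f"smallest p-value found by searching: {pvals[best]:.3f}")
# With enough candidates, the smallest p-value often falls below 0.05 by chance alone.

# Step 2: the "robustness check". Add unrelated controls and a "robust" correction.
controls = rng.normal(size=(n, 5))
X = sm.add_constant(np.column_stack([candidates[:, best], controls]))
fit = sm.OLS(y, X).fit(cov_type="HC3")    # HC3 errors, which depend on observation leverage
print(f"p-value on the chosen regressor after adding 5 controls: {fit.pvalues[1]:.3f}")
# The chance correlation is rarely overturned by such additions, which is why counting
# the discarded insignificant specifications is the more informative diagnostic.
```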

A lack of statistically significant results is typically seen as a barrier to publication, but, as the aforementioned paper indicates, this need not be the case. To an economist reading these papers, it seems prima facie obvious that the manipulations and treatments presented therein should have a substantial effect on participants. The fact that in so many cases no effects appear, at least no statistically significant ones, is in many respects much more stimulating than the confirmation of pre-existing beliefs. A greater emphasis on statistically insignificant results, both in the evaluation of evidence and in the assessment of the value of papers, might be beneficial.

The inevitable conclusion is that we should not take reported significance tests and inferences in economics at face value. Young’s results tell us instead to adopt a very cautious attitude towards published statistical inferences and significance tests. The statistical and econometric procedures and practices used in economics need substantial revision if they are to give us reliable results.

Lars Pålsson Syll
Professor at Malmö University. Primary research interest: the philosophy, history and methodology of economics.
