From my 2009 paper with Weakliem: Throughout, we use the term statistically significant in the conventional way, to mean that an estimate is at least two standard errors away from some “null hypothesis” or prespecified value that would indicate no effect present. An estimate is statistically insignificant if the observed value could reasonably be explained by simple chance variation, much in the way that a sequence of 20 coin tosses might happen to come up 8 heads and 12 tails; we would say...
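The coin-toss illustration can be checked numerically. A hedged sketch (my own check, not code from the paper), using only the standard library:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n tosses of a coin with heads-probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 20
# Two-sided chance, under a fair coin, of a split at least as lopsided as 8-12.
p_two_sided = sum(binom_pmf(k, n) for k in range(n + 1) if abs(k - n // 2) >= 2)

# The "two standard errors" yardstick applied to the observed proportion 8/20 = 0.4.
se = (0.5 * 0.5 / n) ** 0.5
z = abs(8 / n - 0.5) / se

print(f"two-sided p = {p_two_sided:.2f}, distance = {z:.2f} standard errors")
```

Roughly half the time a fair coin produces a split at least this uneven, and the estimate sits under one standard error from 0.5, nowhere near the two-standard-error bar, so 8 heads in 20 tosses is indeed easily explained by chance variation.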
Read More »IPA’s weekly links
Guest post by Jeff Mosenkis of Innovations for Poverty Action. The links are back from vacation. We may have a few back links to catch up on over the next weeks, so here we go: Rachel Meager has public speaking tips for economists. If you want to catch up on a Twitter conversation including me, Chris, and a bunch of other people responding to the Cuddy article on what replication fights in psych mean for econ there’s a 168-slide storify here. I wondered if econ is happily driving...
Read More »Andrew Gelman — When considering proposals for redefining or abandoning statistical significance, remember that their effects on science will only be indirect!
Summary: The end-in-view is doing good science and avoiding junk science, which is proliferating. Adjusting standards, etc., are only means to that end. There are no silver bullets or magic wands. Doing good science depends on good design, accurate measurement, and replication.
Read More »Lars P. Syll — Time to abandon statistical significance
As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10% probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they...
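An illustrative sketch of the excerpt's closing point (the 10% figure is from the excerpt; the rest is my own illustration): if sampling error alone would produce the observed difference with probability 0.10 in any single test, the chance it does so in every one of k independent tests shrinks geometrically, which is why agreement across independent tests strengthens the case for disconfirmation.

```python
# If pure sampling error produces the observed difference with probability
# 0.10 in one test, the chance it does so in all k independent tests is 0.10**k.
p_single = 0.10
for k in (1, 2, 3, 5):
    print(f"{k} independent tests: chance-alone probability = {p_single ** k:.0e}")
```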
Read More »Abandon Statistical Significance — Blakeley B. McShane, David Gal, Andrew Gelman, Christian Robert, and Jennifer L. Tackett
Abstract: In science publishing and many areas of research, the status quo is a lexicographic decision rule in which any result is first required to have a p-value that surpasses the 0.05 threshold and only then is consideration—often scant—given to such factors as prior and related evidence, plausibility of mechanism, study design and data quality, real world costs and benefits, novelty of finding, and other factors that vary by research domain. There have been recent proposals to change the...
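The "lexicographic decision rule" the abstract describes can be sketched in a few lines. A hedged illustration (my own, not the authors' code), under the assumption that the decision reduces to a p-value gate followed by a secondary check:

```python
# Sketch of the status-quo lexicographic rule: the 0.05 p-value threshold
# acts as a hard gate, and other factors are weighed only after it is passed.
def lexicographic_rule(p_value, other_factors_favorable):
    if p_value >= 0.05:
        # Fails the gate: prior evidence, mechanism, design quality, and
        # costs/benefits never enter the decision at all.
        return "reject"
    # Only now -- often scantly -- are the other factors considered.
    return "publish" if other_factors_favorable else "reject"

# A well-designed study with strong supporting evidence but p = 0.2 is
# rejected outright; that asymmetry is what the authors propose abandoning.
print(lexicographic_rule(0.2, True), lexicographic_rule(0.01, True))
```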
Read More »Andrew Gelman — Rosenbaum (1999): Choice as an Alternative to Control in Observational Studies
Paul Rosenbaum’s 1999 paper “Choice as an Alternative to Control in Observational Studies” is really thoughtful and well-written. The comments and rejoinder include an interesting exchange between Manski and Rosenbaum on external validity and the role of theories.... Important. Most studies in social science, including economics, are necessarily observational rather than experimental. The question is how to design observational studies to make them as close as possible to experimental...
Read More »Andrew Gelman — Publish your raw data and your speculations, then let other people do the analysis: track and field edition
There seems to be an expectation in science that the people who gather a dataset should also be the ones who analyze it. But often that doesn’t make sense: what it takes to gather relevant data has little to do with what it takes to perform a reasonable analysis. Indeed, the imperatives of analysis can even impede data-gathering, if people have confused ideas of what they can and can’t do with their data. I’d like us to move to a world in which gathering and analysis of data are separated,...
Read More »Noah Smith — “Theory vs. Data” in statistics too
Important. I think Noah has this right. Fit the tool to the job, rather than the job to the tool. Aristotle defined speculative knowledge in terms of causal explanation. This definition stuck although Aristotle's analysis of causality did not. In the Posterior Analytics, Aristotle places the following crucial condition on proper knowledge: we think we have knowledge of a thing only when we have grasped its cause (APost. 71 b 9–11. Cf. APost. 94 a 20). That proper knowledge is knowledge...
Read More »