
Why most published research findings are false


Instead of chasing statistical significance, we should improve our understanding of the range of R values — the pre-study odds — where research efforts operate. Before running an experiment, investigators should consider what they believe the chances are that they are testing a true rather than a non-true relationship. Speculated high R values may sometimes then be ascertained … Large studies with minimal bias should be performed on research findings that are considered relatively established, to see how often they are indeed confirmed. I suspect several established “classics” will fail the test.

Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds. We should then acknowledge that statistical significance testing in the report of a single study gives only a partial picture, without knowing how much testing has been done outside the report and in the relevant field at large. Despite a large statistical literature for multiple testing corrections, usually it is impossible to decipher how much data dredging by the reporting authors or other research teams has preceded a reported research finding. Even if determining this were feasible, this would not inform us about the pre-study odds. Thus, it is unavoidable that one should make approximate assumptions on how many relationships are expected to be true among those probed across the relevant research fields and research designs.

John P. A. Ioannidis
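Ioannidis's argument turns on a simple relationship: if R is the pre-study odds that a probed relationship is true, α the Type I error rate and β the Type II error rate, then the probability that a finding reported as significant is actually true (the positive predictive value) is PPV = (1 − β)R / (R − βR + α). The sketch below only illustrates how strongly PPV depends on R; the α and power values are conventional illustrative choices, not anything stated in the quoted passage, and the bias term in Ioannidis's fuller model is left out.

```python
# Minimal sketch of Ioannidis's PPV formula, assuming a conventional
# alpha = 0.05 and power = 0.8 (illustrative values, not from the post).
# The bias term u from the original paper is omitted for simplicity.

def positive_predictive_value(R: float, alpha: float = 0.05, power: float = 0.8) -> float:
    """PPV = (1 - beta) * R / (R - beta * R + alpha), with beta = 1 - power."""
    beta = 1.0 - power
    return (1.0 - beta) * R / (R - beta * R + alpha)

if __name__ == "__main__":
    # Low pre-study odds (hypothesis-generating research) versus
    # higher odds (confirmatory work in a well-understood field).
    for R in (0.01, 0.1, 0.5, 1.0, 2.0):
        print(f"R = {R:4.2f}  ->  PPV = {positive_predictive_value(R):.2f}")
```

With these illustrative values, R = 0.01 gives a PPV of roughly 0.14 while R = 1 gives roughly 0.94, which is why findings from fields operating at low pre-study odds are so often false despite being "statistically significant."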

Lars Pålsson Syll
Professor at Malmö University. Primary research interest - the philosophy, history and methodology of economics.
