Economics journals — publishing lazy non-scientific work
In a new paper, Andrew Chang, an economist at the Federal Reserve, and Phillip Li, an economist with the Office of the Comptroller of the Currency, describe their attempt to replicate 67 papers from 13 well-regarded economics journals …
Their results? Just under half of the papers, 29 out of the 59 for which replication could even be attempted (the other eight of the original 67 relied on confidential data or unavailable software), could be qualitatively replicated (that is to say, their general findings held up, even if the authors did not arrive at the exact same quantitative result). For the other half, whose results could not be replicated, the most common reason was “missing public data or code” …
H.D. Vinod, an economics professor at Fordham University … noted that … caution could be outweighed by the sheer amount of work it takes to clean up data files in order to make them reproducible.
“It’s human laziness,” he said. “There’s all this work involved in getting the data together” …
Bruce McCullough said he thought the authors’ definition of what counted as replication – achieving the same qualitative, as opposed to quantitative, results – was far too generous. If a paper’s conclusions are correct, he argues, one should be able to arrive at the same numbers using the same data.
“What these journals produce is not science,” he said. “People should treat the numerical results as if they were produced by a random number generator.”