Summary:
As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10 % probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 % result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed. We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean nothing if the model is …
Topics:
Mike Norman considers the following as important: modeling, philosophy of science, scientific method, statistical significance, statistics
This could be interesting, too:
Joel Eissenberg writes Trusting statistics
Jeff Mosenkis (IPA) writes IPA’s weekly links
James Kwak writes COVID-19: The Statistics of Social Distancing
As shown over and over again when significance tests are applied, people have a tendency to read ‘not disconfirmed’ as ‘probably confirmed.’ Standard scientific methodology tells us that when there is only, say, a 10 % probability that pure sampling error could account for the observed difference between the data and the null hypothesis, it would be more ‘reasonable’ to conclude that we have a case of disconfirmation. Especially if we perform many independent tests of our hypothesis and they all give about the same 10 % result as our reported one, I guess most researchers would count the hypothesis as even more disconfirmed.
We should never forget that the underlying parameters we use when performing significance tests are model constructions. Our p-values mean nothing if the model is wrong. And most importantly — statistical significance tests DO NOT validate models!
Lars P. Syll’s Blog
Time to abandon statistical significance
Lars P. Syll | Professor, Malmo University
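To make the last point concrete, here is a minimal simulation sketch (an editorial illustration, not taken from Syll's post), assuming Python with NumPy and SciPy. In every simulated dataset the null hypothesis (mean = 0) is true, but the observations are serially correlated, which violates the independence assumption behind the one-sample t-test. The nominal 5 % test then rejects far more often than 5 %: the p-value is calibrated only relative to the assumed model.

```python
# Sketch: a p-value is only meaningful under the test's model assumptions.
# The null (mean = 0) is TRUE in every simulated dataset below, but AR(1)
# serial correlation breaks the t-test's independence assumption, so the
# "5% test" rejects the true null far more often than 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_obs, rho = 2000, 50, 0.5

def ar1_sample(n, rho, rng):
    # AR(1) series with mean 0 and unit marginal variance:
    # each observation is correlated with the previous one.
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.normal()
    return x

def rejection_rate(sampler):
    # Fraction of simulated datasets in which the one-sample t-test
    # rejects the (true) null 'mean = 0' at the 5% level.
    rejections = 0
    for _ in range(n_sims):
        _, p_value = stats.ttest_1samp(sampler(), popmean=0.0)
        rejections += p_value < 0.05
    return rejections / n_sims

iid_rate = rejection_rate(lambda: rng.normal(size=n_obs))       # assumptions hold
ar1_rate = rejection_rate(lambda: ar1_sample(n_obs, rho, rng))  # independence violated

print(f"False-positive rate, i.i.d. data (model correct): {iid_rate:.3f}")
print(f"False-positive rate, AR(1) data (model wrong):    {ar1_rate:.3f}")
```

Swapping the AR(1) sampler for any other violated assumption (unequal variances, a mis-specified regression, and so on) makes the same point: a small p-value tells you nothing about whether the model generating it is an adequate description of the world.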