Biomedical, psychological, and social sciences are “soft” insofar as they focus on phenomena whose regularities are amorphous and situational (in contrast to the universal, exact laws which dominate physical sciences). In doing so, they must confront another major source of research uncertainty:
Living organisms are characterized by natural variation and complex feedback within and across organisms, which introduces sampling variation or “noise” that we model as statistical uncertainty. The tremendous variability among and within living beings and systems makes it crucial to account for this uncertainty when designing and analyzing studies of those entities. Unfortunately, our desire for certainty leads to a tendency to approach statistics as if it could eliminate all uncertainty … Caught in this mindset, researchers identify findings with dichotomous labels (statistically significant vs. statistically insignificant) and treat hypothesis testing as if it could turn a continuous probabilistic phenomenon into a deterministic dichotomy, or at least one with high signal-to-noise ratio in well-controlled experiments (Goodman et al., 2016). This cognitive illusion is a major problem for soft sciences, where effect sizes are typically small, random variability is high, and nonrandom sources of variability—uncontrolled biases—must often be considered (Greenland, 2017). The result is a low ratio of true signal (effect) to random noise and bias, hence the low reliability of study results outside of a relatively few exceptionally large and expensive experimental studies …
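How badly the significant/insignificant dichotomy behaves when the signal-to-noise ratio is low is easy to see in a small simulation. The sketch below (purely illustrative; the effect size of 0.2 SD, the sample size of 50 per arm, and the 20 replications are my assumptions, not figures from the quoted paper) runs the same two-group study repeatedly on data with one fixed, real underlying effect — and the dichotomous label flips back and forth from replication to replication purely by chance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)

# Assumed, purely illustrative numbers: a small true effect (0.2 SD)
# against high individual variability, with n = 50 per arm.
true_effect, sd, n = 0.2, 1.0, 50

significant = 0
for study in range(20):                      # 20 replications of the same study
    treated = rng.normal(true_effect, sd, n)
    control = rng.normal(0.0, sd, n)
    _, p = stats.ttest_ind(treated, control)
    significant += p < 0.05                  # the dichotomous label

print(f"{significant}/20 identical studies labelled 'statistically significant'")
```

Under these assumptions only a handful of the twenty studies clear the 0.05 threshold, even though every one of them measures the very same true effect — the “deterministic dichotomy” is an artefact of noise, not a property of the phenomenon.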
[D]ata alone say nothing at all about a topic. Instead, “the data tell us …” is a misleading preface for a particular data interpretation. Every interpretation is laden with assumptions that can and often should be questioned, as when there are concerns about violations of experimental protocols, data integrity, or statistical assumptions.
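That interpretations are assumption-laden is not just a slogan. Here is a minimal sketch, assuming a single hypothetical confounder (all numbers invented for illustration): the very same dataset “tells us” the exposure matters under one set of assumptions, and that it does nothing under another.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

# Hypothetical data: age confounds an exposure that has zero true effect.
age = rng.normal(50, 10, n)
p_exposed = 1 / (1 + np.exp(-(age - 50) / 5))   # older people more often exposed
exposed = rng.random(n) < p_exposed
outcome = 0.1 * age + rng.normal(0, 1, n)       # outcome driven by age alone

# Under one assumption (no confounding), "the data tell us" exposure helps:
naive = outcome[exposed].mean() - outcome[~exposed].mean()

# Change one assumption (adjust for age via least squares) and the story flips:
X = np.column_stack([np.ones(n), exposed.astype(float), age])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

print(f"naive difference: {naive:.2f}   age-adjusted effect: {beta[1]:.2f}")
```

Nothing in the data themselves arbitrates between the two readings; the choice of which variables to condition on is an assumption the analyst brings to the table.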
We live in a world permeated by unmeasurable uncertainty — not quantifiable stochastic risk — which often forces us to make decisions based on anything but ‘rational expectations.’ Our expectations are most often based on the confidence or ‘weight’ we put on different events and alternatives, and the ‘degrees of belief’ with which we weigh probabilities often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled in ‘modern’ social sciences. Often we simply do not know.
So why do economists, companies, and governments continue with the expensive but obviously rather worthless activity of trying to forecast/predict the future?
A couple of years ago yours truly was interviewed by a public radio journalist working on a series on Great Economic Thinkers. We were discussing the monumental failures of the predictions-and-forecasts-business. But — the journalist asked — if these overconfident economists with their ‘rigorous’ and ‘precise’ mathematical-statistical-econometric models are so wrong again and again — why do they persist in wasting time on it?
In a very personal discussion of uncertainty and the hopelessness of accurately modelling what happens in the real world, Nobel laureate Kenneth Arrow comes up with what is probably one of the most plausible reasons:
It is my view that most individuals underestimate the uncertainty of the world. This is almost as true of economists and other specialists as it is of the lay public. To me our knowledge of the way things work, in society or in nature, comes trailing clouds of vagueness … Experience during World War II as a weather forecaster added the news that the natural world was also unpredictable. An incident illustrates both uncertainty and the unwillingness to entertain it. Some of my colleagues had the responsibility of preparing long-range weather forecasts, i.e., for the following month. The statisticians among us subjected these forecasts to verification and found they differed in no way from chance. The forecasters themselves were convinced and requested that the forecasts be discontinued. The reply read approximately like this: ‘The Commanding General is well aware that the forecasts are no good. However, he needs them for planning purposes.’