In other words, if a decision-maker thinks something cannot be true and interprets this to mean it has zero probability, he will never be influenced by any data, which is surely absurd.
So leave a little probability for the moon being made of green cheese; it can be as small as 1 in a million, but have it there since otherwise an army of astronauts returning with samples of the said cheese will leave you unmoved.
To get the Bayesian probability calculus going you sometimes have to assume strange things — so strange that you should perhaps start wondering if maybe there is something wrong with your theory …
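A minimal way to see the arithmetic behind Lindley's point is to run Bayes' rule with a prior of exactly zero against a prior of one in a million. The sketch below is only illustrative and not from the post; the likelihood figures are made up for the example. A zero prior stays at zero no matter how strong the evidence, while the tiny prior is quickly overwhelmed by it.

```python
# Illustrative sketch of Cromwell's rule: with Bayes' rule, a prior of
# exactly 0 can never be moved by evidence, while even a one-in-a-million
# prior can be overwhelmed by sufficiently strong data.

def posterior(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one piece of evidence."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Hypothetical numbers: the evidence (cheese samples) is a million times
# more likely if the hypothesis is true than if it is false.
for prior in (0.0, 1e-6):
    p = prior
    for _ in range(3):  # three rounds of equally strong evidence
        p = posterior(p, likelihood_if_true=1.0, likelihood_if_false=1e-6)
    print(f"prior {prior!r:>6} -> posterior after 3 updates: {p:.6f}")

# Roughly:
# prior    0.0 -> posterior after 3 updates: 0.000000
# prior  1e-06 -> posterior after 3 updates: 1.000000 (just under 1)
```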
Added: For those interested in these questions concerning the reach and application of statistical theories, do read Sander Greenland’s insightful comment:
My take is that the quoted passage is a poster child for what’s wrong with statistical foundations for applications. Mathematics only provides contextually void templates for what might be theories if some sensible mapping can be found between the math and the application context. Just as with frequentist and all other statistical “theories”, Bayesian mathematical theory (template) works fine as a tool when the problem can be defined in a very small world of an application in which the axioms make contextual sense under the mapping and the background information is not questioned. There is no need for leaving any probability on “green cheese” if you aren’t using Bayes as a philosophy, for if green cheese is really found, the entire contextual knowledge base is undermined and all well-informed statistical analyses sink with it.
The problems often pointed out for econometrics are general ones of statistical theories, which can quickly degenerate into math gaming and are usually misrepresented as scientific theories about the world. Of course, with a professional sales job to do, statistics has encouraged such reification through use of deceptive labels like “significance”, “confidence”, “power”, “severity” etc. for what are only properties of objects in mathematical spaces (much like identifying social group dynamics with algebraic group theory or crop fields with vector field theory). Those stat-theory objects require extraordinary physical control of unit selection and experimental conditions to even begin to connect to the real-world meaning of those conventional labels. Such tight controls are often possible with inanimate materials (although even then they can cost billions of dollars to achieve, as with large particle colliders). But they are infrequently possible with humans, and I’ve never seen them approached when whole societies are the real-world target, as in macroeconomics, sociology, and social medicine. In those settings, at best our analyses only provide educated guesses about what will happen as a consequence of our decisions.