from Lars Syll
Truth exists, and so does uncertainty. Uncertainty acknowledges the existence of an underlying truth: you cannot be uncertain of nothing: nothing is the complete absence of anything. You are uncertain of something, and if there is some thing, there must be truth. At the very least, it is that this thing exists. Probability, which is the science of uncertainty, therefore aims at truth. Probability presupposes truth; it is a measure or characterization of truth. Probability is not necessarily the quantification of the uncertainty of truth, because not all uncertainty is quantifiable. Probability explains the limitations of our knowledge of truth, it never denies it. Probability is purely epistemological, a matter solely of individual understanding. Probability does not exist in things; it is not a substance. Without truth, there could be no probability.
William Briggs’ approach is — as he acknowledges in the preface of his interesting and thought-provoking book — “closely aligned to Keynes’s.”
Almost a hundred years after John Maynard Keynes wrote his seminal A Treatise on Probability (1921), it is still very difficult to find statistics textbooks that seriously try to incorporate his far-reaching and incisive analysis of induction and evidential weight.
The standard view in statistics, and in the axiomatic probability theory underlying it, is to a large extent based on the rather simplistic idea that ‘more is better.’ But as Keynes argues, ‘more of the same’ is not what is important when making inductive inferences. It is rather a question of ‘more but different.’
Variation, not replication, is at the core of induction. Finding that p(x|y) = p(x|y & w) does not make w ‘irrelevant.’ Knowing that the probability is unchanged when w is present gives p(x|y & w) a different evidential weight (‘weight of argument’). Running 10 replicative experiments does not make you as ‘sure’ of your inductions as running 10,000 varied experiments, even if the probability values happen to be the same.
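To make the point concrete, here is a minimal Beta-Binomial sketch in Python (the sample sizes and counts are illustrative assumptions, not figures from Keynes or Briggs). Both data sets give the same point estimate of 0.7, but the amount of evidence behind that number, visible in the width of the interval, differs enormously:

```python
# Sketch of Keynes's 'weight of argument': two bodies of evidence can
# yield the same probability estimate while differing hugely in weight.
# (The counts below are illustrative assumptions, not from the text.)
from scipy import stats

for n, k in [(10, 7), (10_000, 7_000)]:
    # Beta posterior for a success probability under a uniform prior
    posterior = stats.beta(1 + k, 1 + n - k)
    lo, hi = posterior.interval(0.95)
    print(f"n={n:>6}: estimate {k / n:.2f}, 95% interval ({lo:.3f}, {hi:.3f})")
```

The probability value is identical in the two cases, while the second interval is roughly thirty times narrower; in Keynes’s terms, the two conclusions carry very different weights of argument.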
According to Keynes we live in a world permeated by unmeasurable uncertainty, not quantifiable stochastic risk, which often forces us to make decisions based on anything but ‘rational expectations.’ Keynes rather thinks that we base our expectations on the confidence or ‘weight’ we put on different events and alternatives. To Keynes, expectations are a question of weighing probabilities by ‘degrees of belief,’ beliefs that often have precious little to do with the kind of stochastic probabilistic calculations made by the rational agents modelled by “modern” social sciences. And often we ‘simply do not know.’ As Keynes writes in the Treatise:
The kind of fundamental assumption about the character of material laws, on which scientists appear commonly to act, seems to me to be [that] the system of the material universe must consist of bodies … such that each of them exercises its own separate, independent, and invariable effect, a change of the total state being compounded of a number of separate changes each of which is solely due to a separate portion of the preceding state … In my judgment, the practical usefulness of those modes of inference … on which the boasted knowledge of modern science depends, can only exist … if the universe of phenomena does in fact present those peculiar characteristics of atomism and limited variety which appears more and more clearly as the ultimate result to which material science is tending.
Science according to Keynes should help us penetrate to “the true process of causation lying behind current events” and disclose “the causal forces behind the apparent facts.” Models can never be more than a starting point in that endeavour. He further argued that it was inadmissible to project history onto the future. Consequently, we cannot presuppose that what has worked before will continue to do so in the future. That statistical models can get hold of correlations between different ‘variables’ is not enough. If they cannot get at the causal structure that generated the data, they are not really ‘identified.’
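One way to see why correlations alone do not ‘identify’ a causal structure is a small simulation in Python (the common-cause setup below is an illustrative assumption, not anything from the text):

```python
# Two variables can be strongly correlated without either causing the
# other, because a hidden common cause z generates both. A model fitted
# to the correlation alone says nothing about interventions on x.
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=100_000)            # hidden common cause
x = z + 0.1 * rng.normal(size=100_000)  # z -> x
y = z + 0.1 * rng.normal(size=100_000)  # z -> y
print("observational corr(x, y):", round(float(np.corrcoef(x, y)[0, 1]), 3))

# Intervene: set x by hand, leaving the mechanism that produces y untouched.
x_do = rng.normal(size=100_000)         # do(x): x no longer depends on z
y_do = z + 0.1 * rng.normal(size=100_000)
print("corr under do(x):", round(float(np.corrcoef(x_do, y_do)[0, 1]), 3))
```

The observed correlation is close to one, yet intervening on x leaves y untouched: the correlation survives only as long as the hidden data-generating structure does, which is exactly why identification requires more than correlational fit.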
How strange that writers of statistics textbooks, as a rule, do not even touch upon these aspects of scientific methodology, which seem so fundamental and important for anyone trying to understand how we learn and orient ourselves in an uncertain world. An educated guess as to why this is so is that Keynes’s concepts cannot be squeezed into a single calculable numerical ‘probability.’ In the quest for quantities one turns a blind eye to qualities and looks the other way, but Keynes’s ideas keep creeping out from under the statistics carpet.
It’s high time that statistics textbooks give Keynes his due.