A philosopher’s look at science
You will already be familiar with the fact that broad swathes of social science research are given over to establishing, analysing, generalising, theorising about and using statistical associations that are manipulated with the assumptions of probability theory.
This makes sense if probabilities can be attached to broad swathes of the phenomena that social science is meant to deal with. But can they? Here we face the same issue that you will meet when I discuss the assumption of universal determinism: is the social world really that orderly? Perhaps it is my failure to see the forest for the trees, but when I look at various studies across the social sciences, from psychology, sociology and political science to economics and public health, I often cannot see grounds for this assumption, I sometimes see good evidence against it and I also see places where it seems to be leading us astray, with respect both to the accumulation and the use of knowledge.
To understand ‘non-routine’ decisions and unforeseeable changes in behaviour, ergodic probability distributions are of no avail. In a world full of genuine uncertainty — where real historical time rules the roost — the probabilities that ruled the past are not those that will rule the future.
Time is what prevents everything from happening at once. To simply assume that economic processes are ergodic and to concentrate on ensemble averages, which are, a fortiori, timeless in any relevant sense, is not a sensible way of dealing with the kind of genuine uncertainty that permeates open systems such as economies.
When you assume economic processes to be ergodic, ensemble and time averages are identical. Let me give an example: Assume we have a market with an asset priced at 100 €. Then imagine the price first goes up by 50% and then later falls by 50%. The ensemble average for this asset would be 100 €, because here we envision two parallel universes (markets): in one the asset price falls by 50% to 50 €, and in the other it rises by 50% to 150 €, giving an average of 100 € ((150 + 50)/2). The time average for this asset would be 75 €, because here we envision one universe (market) in which the asset price first rises by 50% to 150 € and then falls by 50% to 75 € (0.5 × 150).
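For readers who prefer to see the arithmetic spelled out, here is a minimal Python sketch of the example above, using the same illustrative numbers (100 €, +50%, -50%):

# The illustrative numbers from the example: a 100 € asset that
# either rises or falls by 50%.
start_price = 100.0
up, down = 1.5, 0.5  # +50% and -50% as multiplicative factors

# Ensemble view: two parallel markets, one where the price rises,
# one where it falls. Averaging across the ensemble gives 100 €.
ensemble_average = (start_price * up + start_price * down) / 2
print(ensemble_average)  # 100.0

# Time view: one market where the price first rises and then falls.
# Following that single path through time ends at 75 €.
end_of_path = start_price * up * down
print(end_of_path)  # 75.0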
From the ensemble perspective nothing really, on average, happens. From the time perspective lots of things really, on average, happen.
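To see that this is not an artefact of the two-step example, one can let the same up-or-down move repeat many times. The following sketch (a hypothetical extension of the example, not part of the original text) draws +50% or -50% with equal probability each period: the average across many parallel paths stays close to the starting price, while the typical single path followed through time shrinks, since every up-down pair multiplies the price by 1.5 × 0.5 = 0.75.

import random

random.seed(1)  # for reproducibility of this illustration

start_price = 100.0
periods = 10     # number of +50% / -50% moves along each path
paths = 10_000   # number of parallel markets in the ensemble

final_prices = []
for _ in range(paths):
    price = start_price
    for _ in range(periods):
        price *= random.choice((1.5, 0.5))  # +50% or -50%, equal odds
    final_prices.append(price)

# Ensemble average across the parallel markets: close to 100 €.
print(sum(final_prices) / paths)

# Median outcome of a single path followed through time: far below 100 €,
# because each up-down pair multiplies the price by 0.75.
print(sorted(final_prices)[paths // 2])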
Assuming ergodicity, there would have been no difference at all. What is important about real social and economic processes being nonergodic is that uncertainty, not calculable risk, rules the roost. That was something both Keynes and Knight basically said in their 1921 books. Thinking about uncertainty in terms of ‘rational expectations’ and ‘ensemble averages’ has had seriously bad repercussions on the financial system.
Knight’s concept of uncertainty has an epistemological foundation, whereas Keynes’ is decidedly ontological. Of course, this also has repercussions for the issue of ergodicity in a strict methodological and mathematical-statistical sense. I think Keynes’ view is the more warranted of the two.
The most interesting and far-reaching difference between the epistemological and the ontological view is that if one subscribes to the former, Knightian, view, one opens the door to the mistaken belief that with better information and greater computing power we should somehow always be able to calculate probabilities and describe the world as an ergodic universe. As Keynes convincingly argued, that is ontologically just not possible.
To Keynes, the source of uncertainty lay in the nature of the real, nonergodic world. It had to do not only, or even primarily, with the epistemological fact that we do not know the things that today are unknown, but rather with the much deeper and more far-reaching ontological fact that there often is no firm basis on which we can form quantifiable probabilities and expectations at all.
Sometimes we do not know because we cannot know. Using models based on unsubstantiated beliefs in the existence of probabilities when in fact there are none is one of the main reasons for the severe shortcomings of mainstream economics.