By Lars Pålsson Syll | Topics: Statistics & Econometrics
The difference between rate and probability (wonkish)
Suppose we have a series of Bernoulli trials in which each trial has the same probability p of success and the trials are independent, as in the standard model of coin tossing, treating ‘heads’ as ‘success.’ Then the Law of Large Numbers guarantees that the rate of successes converges (in probability) to the probability of success.
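A minimal simulation sketch of this convergence (the function name and the choice p = 0.5 are illustrative, not from the original text):

```python
import random

def success_rate(p, n, seed=0):
    """Simulate n independent Bernoulli(p) trials and return the empirical rate of successes."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    return sum(rng.random() < p for _ in range(n)) / n

# As n grows, the empirical rate approaches p (Law of Large Numbers).
for n in (100, 10_000, 1_000_000):
    print(n, success_rate(0.5, n))
```

Under the stated assumptions (independence, constant p), the gap between the empirical rate and p shrinks as n grows; no such guarantee holds without them.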
If a sequence of trials is random and the chance of success is the same in each trial, then the empirical rate of success is an unbiased estimate of the underlying chance of success. If the trials are random and they have the same chance of success and you know the dependence structure of the trials (for example, if the trials are independent), then you can quantify the uncertainty of that estimate of the underlying chance of success. But the mere fact that something has a rate does not mean that it is the result of a random process.
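A sketch of how that uncertainty can be quantified when the trials are assumed independent with a common chance of success: the standard Wald-type confidence interval for a binomial proportion. The interval is only valid under those assumptions, which is the point of the paragraph above; the function and its inputs are hypothetical illustrations.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Approximate 95% confidence interval for a binomial proportion.

    Valid only under the assumption of independent trials with the
    same chance of success; the standard-error formula below depends
    on that dependence structure.
    """
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error under independence
    return p_hat - z * se, p_hat + z * se

lo, hi = proportion_ci(527, 1000)  # e.g. 527 successes in 1000 trials
print(round(lo, 3), round(hi, 3))
```

If the trials are dependent in an unknown way, this interval can be badly wrong: the estimate may still be unbiased, but its stated precision is not.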
For example, suppose a sequence of heads and tails results from a series of random, independent tosses of an ideal fair coin. Then the rate of heads will converge (in probability) to one half. But suppose I give you the sequence ‘heads, tails, heads, tails, heads, tails, heads, tails, heads, tails, …’ ad infinitum. The limiting rate of heads is 1/2. While that sequence could be the result of a sequence of fair random tosses, it is implausible, and it certainly need not be. Sequences of outcomes are not necessarily the result of anything random, and rates are not necessarily (estimates of) probabilities.
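The alternating sequence above can be made concrete: a purely deterministic rule whose rate of heads converges to 1/2 with no randomness anywhere (a small illustrative sketch):

```python
def alternating_rate(n):
    """Rate of heads in the deterministic sequence H, T, H, T, ... of length n."""
    heads = (n + 1) // 2  # heads occupy positions 1, 3, 5, ...
    return heads / n

# The limiting rate is 1/2, yet nothing here is random.
print(alternating_rate(10))
print(alternating_rate(1_001))
```

The rate exists and converges, but it estimates nothing: there is no underlying chance mechanism for it to estimate.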