Objective (or physical) probability is the kind of probability that can be directly supported or refuted by relative frequencies. The kind used in statements such as:
The probability of rolling a seven with two dice is 1/6.
If both parent pea plants have yellow-green alleles for seed color, the probability that a child plant has yellow peas is 3/4.
The probability that a free neutron decays within 611 seconds is 1/2.
There’s a 95% probability that between 57 and 63 percent of 1,000 randomly selected adult Americans support legalizing marijuana.
Thus the first statement can be tested by rolling the dice 90 times and seeing if you get about 15 sevens.
And the last statement can be tested by conducting hundreds of random polls of 1,000 Americans and seeing whether, in about 95 percent of them, between 570 and 630 of those polled support legalizing marijuana.
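The dice test can be sketched in code. This is a minimal simulation, assuming fair dice; the function name and trial counts are illustrative:

```python
import random

def fraction_of_sevens(trials, seed=0):
    """Roll two fair dice `trials` times; return the fraction of rolls summing to 7."""
    rng = random.Random(seed)
    sevens = sum(1 for _ in range(trials)
                 if rng.randint(1, 6) + rng.randint(1, 6) == 7)
    return sevens / trials

# In 90 rolls we expect roughly 15 sevens (a fraction of about 1/6),
# though any single run of 90 will fluctuate around that value.
print(round(fraction_of_sevens(90) * 90))

# With many more rolls, the relative frequency settles near 1/6,
# which is what makes the objective-probability claim testable.
print(fraction_of_sevens(100_000))
```

This is the sense in which relative frequencies directly support or refute an objective probability claim: repeat the experiment and compare the observed frequency to the stated probability.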
Epistemic (or evidential) probability is the kind of probability relative to evidence. The kind used in statements such as:
The Tunguska Event of 1908 was most likely caused by a meteor.
The probability of rain tomorrow is 90 percent.
It’s very likely the defendant is guilty based on the evidence presented at trial.
It’s more probable than not that smoking causes lung cancer.
There’s a 95% probability that between 57 and 63 percent of Americans support legalizing marijuana.
Thus, for the first statement, the evidence includes not only evidence for the explosion of a meteor but evidence against competing hypotheses such as a natural gas explosion, a small nuclear blast, and the explosion of a micro black hole. There’s only one Tunguska Event, so relative frequencies are irrelevant.
And for the last statement, the evidence consists of the results of any number of random polls. But there’s a difference between (i) a 95% probability that between 57 and 63 percent of 1,000 people polled support legalizing marijuana and (ii) a 95% probability that between 57 and 63 percent of Americans support legalizing marijuana. We can confirm statement (i) by conducting a hundred polls and seeing if, in about 95 of them, between 57 and 63 percent support legalizing marijuana. But statement (ii) can’t be confirmed in the same way, by seeing whether, in about 95 of a hundred American populations, between 57 and 63 percent of a given population support legalizing marijuana. There’s only one American population.
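Statement (i)’s testability by repetition can be illustrated with a simulation. This is a sketch, assuming a population in which exactly 60 percent support legalization; the poll sizes and counts are illustrative:

```python
import random

def polls_in_interval(num_polls=1000, poll_size=1000, true_support=0.60,
                      lo=0.57, hi=0.63, seed=1):
    """Simulate repeated random polls of a population with known support level.

    Returns the fraction of polls whose sample proportion lands in [lo, hi].
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(num_polls):
        supporters = sum(rng.random() < true_support for _ in range(poll_size))
        if lo <= supporters / poll_size <= hi:
            hits += 1
    return hits / num_polls

# About 95% of the simulated polls should land between 57% and 63%.
print(polls_in_interval())
```

The polls can be repeated at will, so statement (i) is checkable by relative frequencies. No analogous simulation exists for statement (ii): there is nothing to repeat, because there’s only one American population.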
Classical statisticians use objective probabilities.
Bayesians use epistemic probabilities.
Difference between Bayesian and Classical (Frequentist) Statistics in One Example
In a random poll of 1,000 Americans, 600 approved of the president’s job performance.
Using Bayes’ Theorem, a Bayesian infers that
There’s a 95 percent probability that 60 percent of Americans approve of the president’s performance, with a margin of error of ±3 percentage points.
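One standard way to reach a conclusion of this form (a sketch, assuming a uniform Beta(1, 1) prior, which is an assumption not stated in the example) treats the population proportion as a random variable: after observing 600 approvals in 1,000, Bayes’ Theorem gives a Beta(601, 401) posterior, which is approximately normal.

```python
import math

# Data: 600 of 1,000 polled approve. With a uniform Beta(1, 1) prior
# (an illustrative assumption), Bayes' Theorem gives a Beta posterior.
approve, n = 600, 1000
a, b = 1 + approve, 1 + (n - approve)          # posterior is Beta(601, 401)

post_mean = a / (a + b)
post_sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# 95% interval via the normal approximation to the Beta posterior.
lo = post_mean - 1.96 * post_sd
hi = post_mean + 1.96 * post_sd
print(f"posterior mean = {post_mean:.3f}, 95% interval = ({lo:.3f}, {hi:.3f})")
```

The interval works out to roughly 60 ± 3 percentage points, and for the Bayesian it is a probability statement about the population parameter itself.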
A classical statistician regards the Bayesian’s conclusion as statistically improper (at best) or meaningless (at worst). The frequentist argument:
A proper statistical probability must be directly testable by repeated experiments.
The statement of a population parameter can’t be tested in this way, since (unlike samples) there’s only one population.
Therefore, the Bayesian’s statement is statistically improper.
Classical statisticians express their analysis of a poll using the technical terms confidence interval and confidence level, introduced by Jerzy Neyman in 1937:
60 percent of Americans approve of the president’s job performance, with a confidence interval of ±3 percentage points at the 95% confidence level.
Eliminating the jargon, the classical statistician’s statement amounts to this:
Given that 60 percent of Americans approve of the president’s performance, there’s a 95 percent probability that in a random sample of 1,000 Americans 60 percent will approve of the president’s performance, with a margin of error of ±3 percentage points.
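The ±3-point figure comes from the sampling distribution of the sample proportion. A sketch of the standard calculation, assuming a population proportion of 0.6 and a sample of 1,000:

```python
import math

p, n = 0.60, 1000                     # assumed population proportion, sample size
se = math.sqrt(p * (1 - p) / n)       # standard error of the sample proportion
margin = 1.96 * se                    # half-width of the 95% interval

# Given p = 0.6, about 95% of random samples of 1,000 will show
# support within this margin of 60%.
print(f"margin of error = {margin * 100:.1f} percentage points")
```

Note the direction of the conditional: the probability attaches to what samples will show given an assumed population value, not to the population value itself.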
For classical statisticians, probabilities are only about samples drawn from the population. For Bayesians, probabilities are about the population itself.
Estimation
Population Parameters:
For Bayesians, a population parameter is a random variable, computable by Bayes’ Theorem.
For classical statisticians, a statistical probability must be testable by repeated experiments. Samples can be repeated, but not populations.