Recently, my relationship with the concept of probability has been analogous to what a lot of people go through with religion. At first, I thought I perfectly understood probability as it was first presented to me; but then, I had an internal crisis, when I started to question if the notion of probability was even logically *coherent*; and finally, upon reflection, now I think I’ve clarified a few things that I’d like to share with you. Let me explain.

**A. What does probability even mean anymore?**

I’ll start off with a standard account of probability, as I and many of you learned in school. In math: P(E) ≈ N_{E}/N_{T}. In words: the probability of an event E can be approximated by the number of times event E occurred (N_{E}) out of a large number of repeated trials (N_{T}), with the approximation becoming exact as N_{T} approaches infinity. The probability of event E is then used to assess how likely event E is to occur the next time I conduct a trial. For example, I could make the statement, “The probability of me rolling a 1 on a 6-sided die is ⅙,” and this would make total sense given this definition. If I roll a fair die enough times, I’ll end up rolling a 1 roughly once in every six rolls, and that long-run frequency is what defines the probability of ⅙. So the next time I roll a die, I can use ⅙ as my estimate for how likely I am to roll a 1. Easy enough.
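This frequentist recipe is easy to check by simulation. Here’s a minimal sketch in Python (the function and its names are mine, just for illustration):

```python
import random

def estimate_probability(event, trial, n_trials=100_000, seed=42):
    """Frequentist estimate: count how often `event` occurs over many trials."""
    rng = random.Random(seed)
    n_event = sum(event(trial(rng)) for _ in range(n_trials))
    return n_event / n_trials  # P(E) ~ N_E / N_T

# Trial: roll a fair 6-sided die. Event E: the roll is a 1.
p = estimate_probability(event=lambda roll: roll == 1,
                         trial=lambda rng: rng.randint(1, 6))
print(p)  # close to 1/6 ~ 0.1667
```

The more trials you run, the closer the estimate hugs ⅙, which is exactly what the definition promises.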

But I think statements like this constitute only a small fraction of the total number of statements we make that involve the concept of probability. For example, political forecasters make statements like this all the time: “The probability of the UK leaving the European Union, or ‘Brexit,’ by the end of the week is 40%.” Let’s see if we can apply the above definition of probability to this statement. Event E is easy to define; it’s the UK leaving the EU by the end of the week. But if I want to know how many times event E occurs over a large number of trials, I’m in big trouble. In the die-rolling case, I know what constitutes a “trial”; it’s every time I roll a die. But what’s a trial in the Brexit case? Every time this week ends? This is obviously problematic, since we don’t yet have even one instance where this week has ended. I could say a trial is every time any week has ended since the creation of the EU, but this is also a problem, since I want each trial to have the same probability distribution of outcomes as the next. Presumably the probability of the UK leaving the EU was way different a year ago than by the end of this week. Without having even the theoretical possibility of repeating a large number of trials, can I even use the concept of probability? Does the Brexit probability statement make any logical sense?

Things get even trickier when you realize that people make statements of probability that aren’t even about future events. For example, take this statement: “The probability that intelligent life exists outside of Earth is 30%.” For the die-rolling case, the probability of ⅙ defines how likely it is for me to roll a 1 the next time I roll a die. Here, the probability of 30% defines how likely it is that intelligent life exists outside of the Earth the next time…what, exactly? There is no “next time.” This statement is completely about the world as it already exists. But if this is the case, we have a huge problem. As a matter of fact, intelligent life outside of Earth either exists or it doesn’t. If we can make a statement of probability at all, it should be either 100% or 0%. Is the intelligent life probability statement just completely incoherent then?^{1}

Despite the problems with applying the above definition of probability to the Brexit statement and the intelligent life statement, these statements still intuitively *mean* something to us. We don’t react to these statements the same way we would react to a logically incoherent statement like, “The probability of Brexit by the end of the week is the letter A.” It seems that these probability statements are communicating *something* to us. But what is that something, exactly?

**B. Bins and Bayes**

I don’t think there’s a good way to reconcile these probability statements with the above definition of probability. As I see it, the solution lies in a different definition of probability, one that involves a person’s *subjective* *belief* in the likelihood of an event.

To illustrate, imagine that you have a hundred different bins. Each bin has a label on it with a different probability. So there’s a 1% bin, a 2% bin, etc. all the way up to 100%. When you make a statement like “There is a 40% chance of Brexit by the end of the week,” you are placing a slip of paper that says “Brexit will happen by the end of the week” in the 40% bin. Over time, if you make enough probability statements, these bins will be full of slips of paper. If you take the 40% bin, empty it out, and put all the statements that ended up true in a pile, ideally you’d end up with 40% of the slips of paper in that pile. Similarly, the 30% bin should end up with 30% of the slips of paper in the “true” pile, and so on and so forth for the rest of the bins.

For most of us, our bins won’t be so ideal. Maybe we’re too overconfident, and our 40% bin ends up with only 20% of its statements being true. Or maybe we don’t pay much attention when we make probability statements, and we put our statements in bins more randomly than we should. But the *goal*, at least, for somebody who wants to make accurate probability statements would be to match the percent of true statements in each bin to the label on that bin.
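The bin bookkeeping above is simple enough to sketch in code. Here the `forecasts` track record is made up for illustration; each entry pairs a stated probability with whether the statement turned out true:

```python
from collections import defaultdict

# Hypothetical track record: (stated probability, did it come true?)
forecasts = [
    (0.4, True), (0.4, False), (0.4, False), (0.4, True), (0.4, False),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
]

def calibration(forecasts):
    """For each 'bin' (stated probability), what fraction of slips came true?"""
    bins = defaultdict(list)
    for p, came_true in forecasts:
        bins[p].append(came_true)
    return {p: sum(outcomes) / len(outcomes) for p, outcomes in bins.items()}

print(calibration(forecasts))  # {0.4: 0.4, 0.3: 0.2}
```

In this made-up record, the 40% bin is perfectly calibrated (2 of 5 statements true), while the 30% bin runs under (1 of 5 true), so this forecaster is slightly overconfident there.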

*[Image caption: This person is highly skeptical of events occurring in general.]*

So now we have two different concepts of probability, each of which describes distinct kinds of statements that communicate different things. When you make a die-rolling-type probability statement, you are telling me about the number of events that would occur out of a theoretical large number of trials; this often reflects the inherent symmetry of a system, like a die or a deck of cards. When you make a Brexit-type (or intelligent-life-type) statement, you’re simply telling me which bin you’re deciding to place that statement in.

At first glance, die-rolling-type statements seem more mathematically rigorous. I mean there’s a whole mathematical equation and everything. Meanwhile, Brexit-type statements are based on…subjective feelings?

But there are actually lots of ways to make Brexit-type statements remarkably accurate. The key is to base these kinds of statements on available evidence. This evidence can be historical (i.e., how often have events like event E occurred in the past?); it can be predicated on an aggregation of the opinions of a large number of people, as in prediction markets, where people bet on the potential outcomes of different events; it can be based on statistics- and science-informed analyses of existing data, as in weather forecasting; or it can even be based on abstract philosophical reasoning. So making Brexit-type statements doesn’t consist of just choosing a probability that “feels” right; a lot of rigorous analysis can go into them.

As it turns out, probability theorists are a lot smarter than me and so they figured all this out a long time ago. What I call die-rolling-type probability they call *frequentist* probability (because probabilities are defined as frequencies of events), and what I call Brexit-type probability they call *Bayesian* probability (named after Bayes’ theorem, which tells you how to update your subjective probability based on new evidence).
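Bayes’ theorem itself fits in a few lines. Here’s a minimal sketch of an update, with made-up numbers (the 40% prior and the poll likelihoods are purely illustrative):

```python
def bayes_update(prior, likelihood, likelihood_given_not):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H)P(H) + P(E|not H)P(not H)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Made-up numbers: prior belief in Brexit-by-Friday is 40%. A favorable poll
# is twice as likely if Brexit is coming (80%) as if it isn't (40%).
posterior = bayes_update(prior=0.40, likelihood=0.80, likelihood_given_not=0.40)
print(round(posterior, 2))  # 0.57
```

Seeing the poll moves the statement from the 40% bin up to roughly the 57% bin; that transfer between bins, driven by evidence, is the Bayesian update.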

What I find interesting is that, although Bayes’ theorem was originally published in 1763, the concept of Bayesian probability was rarely invoked until the last 20 years or so. Frequentist probability ruled for a long time^{2}, and many of the sciences are still dominated by frequentist statistics; standard statistical techniques involving p-values, confidence intervals, and null hypotheses are brought to you by frequentist statistics. But Bayesianism is on the rise; fields ranging from genetics to medicine to machine learning now make heavy use of Bayesian statistics.

I guess the “Bayesian revolution” hasn’t made its way into high school or undergraduate education, though. I certainly never came across the concept in my introductory statistics class; all I got was frequentist statistics. And as a result, you get this post.

**The Conclusion Box: Providing information to-go.**

A. Probability is often defined as the number of occurrences of an event over a large number of repeated trials, but this definition seems inadequate for most of the statements we make involving probability on a daily basis, such as “The probability of Brexit is 40%.”

B. These kinds of probability statements reflect subjective judgments of the likelihood of an event; this kind of probability is known as Bayesian probability, and it has seen increasing use across the sciences in recent years.

^{1}There are a couple of ways that you could imagine stretching the concept of a large number of repeated trials so that the Brexit and intelligent life statements would make sense. One way is to say that if you ran the universe over again from the beginning an infinite number of times, then in 40% of those runs Brexit would occur by the end of this week, or in 30% of those runs intelligent life would exist outside of Earth. A second way is similar: you could say that of all the universes that exist, in 40% of them Brexit will occur by the end of this week, and in 30% of them intelligent life exists outside of Earth. In each case, each universe counts as a repeated trial.

But both of these ways involve controversial metaphysical assumptions. In the first, you have to assume that the way the trajectory of the universe unfolds is not fully determined by initial conditions, i.e., that the universe is non-deterministic; in the second, you have to assume the existence of a multiverse. No matter what you think about the truth or falsity of these assumptions, I don’t think most people mean to imply them when they make a probability statement about something like Brexit.

^{2}The success of frequentist probability was partly derived from statisticians’ aversion to any hint of subjectivity, and also from the fact that some of the greatest examples of successful applications of Bayes’ theorem occurred in wartime, and were therefore classified. Apparently Bayesian thinking was critical to the British cracking the German U-boats’ Enigma code during WWII, but evidence of this was buried after the war because the British didn’t want the Soviets to know how they did it.