by Jan Malakhovski, version 1.0.3
Bayes' theorem allows you to compute the probability of a hypothesis given a piece of evidence. But what do you do when you have several (hopefully independent) tests all pointing in different directions with different probabilities? Naively applying the classical formula of Bayes' theorem, expressed using the law of total probability, multiple times in a row, once for each piece of evidence, will produce an incorrect result! The simplest way to do this properly is to switch to using Bayes' theorem in odds form, multiply likelihood ratios, and then convert back to probabilities. What? Why? This article defines everything, derives all the relevant formulas from scratch, and then demonstrates how to use them with some examples.
Consider the following. You think you might have caught some awful illness, so you get a lab test done. The advertisement materials say the test is 99% accurate (as in, 99% of tests of sick people come back positive and 99% of tests of non-sick people come back negative), and it comes back positive. What is the probability you are actually sick?
Without more information, it’s actually impossible to say! It depends on the probability of being sick with that illness in the first place.
For demonstration purposes, let’s say that this prior probability is 1%. If we make 10000 people take that lab test:
100 (1% of 10000) of them will be sick, and 99 (99% of 100) of those will have a positive test result;
9900 (the other 99% of 10000) of them will be healthy, but 99 (1% of 9900) of those will also have a positive test result.
Therefore, if the above lab test came back positive, the probability of you being sick is only 50% (99 true positives out of 99 + 99 = 198 total positives) and the “99% test accuracy” claim is misleading.
Moreover, a 1% prior probability of being ill is rather high for an average illness. For example, in 2022 HIV/AIDS — one of the most common worldwide diseases — had about 0.5% of the world-wide population infected. So, a “99% accurate” positive HIV test — which is what most of such modern tests claim to be — has a probability of being true of only about 33% ($\frac{0.99 \cdot 0.005}{0.99 \cdot 0.005 + 0.01 \cdot 0.995} \approx 0.33$).
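In Python, the whole computation above fits in a few lines (the function name here is made up for illustration):

```python
def posterior_given_positive(prior, sensitivity, specificity):
    """Probability of actually being sick given a single positive test result."""
    true_positives = prior * sensitivity                # sick and test positive
    false_positives = (1 - prior) * (1 - specificity)   # healthy but test positive
    return true_positives / (true_positives + false_positives)

# The 1%-prior example: a "99% accurate" test gives only a ~50% posterior.
print(posterior_given_positive(0.01, 0.99, 0.99))   # ≈ 0.5
# The HIV example: 0.5% prior, "99% accurate" test gives about 33%.
print(posterior_given_positive(0.005, 0.99, 0.99))  # ≈ 0.33
```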
Most laypersons are quite surprised the first time they hear about this, even though this computation is a simple and widely-known corollary of the well-known Bayes’ theorem (which is going to be discussed below) and it is immediately relevant when a person tests positive for an awful illness. It seems, however, that at least some people are starting to appreciate it: a fair fraction of people I mentioned this to had at least heard something about it before, which makes me feel optimistic.
As a side-note, personally, I think it’s absurd that all relevant probabilities don’t simply get automatically computed and listed on all lab test results and printouts given to doctors and patients. Doing that would prevent a lot of unneeded stress. I kind of understand why they do not, but I think the assumptions behind that reasoning are wrong. The first lab that starts printing these probabilities while explaining them properly will simply capture the market of customers that care about precision, not get infinite numbers of complaints. Personally, I would rather use a more expensive lab that tracks its own accuracies — via patient diagnosis outcome tracking and/or publicly shared videos of them performing equipment calibration — than a cheaper one that does not.
(If some of the following does not make sense, don’t worry, it will all be explained in the rest of this article and the followup articles in this series.)
But what if you have several tests or pieces of evidence? How do you combine all those probabilities? If you work through the math carefully, you’ll notice that you can’t simply apply the classical formula of Bayes’ theorem expressed using the law of total probability multiple times (even though this seems like a reasonable thing to do after reading the corresponding Wikipedia article), because it will start accumulating errors in the updated priors.
But if you could do it right somehow, then this kind of reasoning, Bayesian reasoning, could be applied to almost anything.
For example, when lab testing, it would be quite useful to have a way to combine multiple test results performed by independent labs using different equipment and methods (and, thus, different accuracies) to produce a total probability of the hypothesis under all test results.
This article is going to explain this use case.
Similarly, when you read scientific studies, for each study and each hypothesis it tests, a p-value (or a confidence interval, which is a more verbose way to represent the same thing) informs you about the probability of that study having those observations given that its hypothesis is false. Meanwhile, Bayes’ theorem computes what you actually want to know: the probability of that hypothesis being true (or false) given the observations. But it only really becomes useful in practice if you can combine results of several independent studies testing the same hypothesis.
The next article in this series (coming soon) is going to explain this use case.
Similarly, when you read news articles, you could also use the same method to approximate probabilities of stated claims being true from trustworthiness probabilities of different sources/journalists/reporters/podcasters/news organizations/etc, topics, geographical locations, stock market changes, etc. Though, this use case is more complex, since you can’t assume different pieces of evidence to be independent here. This is essentially what intelligence agencies are supposed to do for their governments, when they aren’t collecting evidence. (What they actually do most of the time is sabotage evidence collection by others and power seek instead.)
A future article in this series (coming not so soon) is going to explain this use case.
So, this seems very useful. How can you do this probability math properly?
Eliezer Yudkowsky’s writings repeatedly mention that one needs to, to paraphrase, “switch to using Bayes’ theorem in odds form and then simply multiply likelihood ratios”. When I read that the first time, the problem with using the classical Bayes’ theorem multiple times in a row was not obvious to me, so I was really confused. What? Why? So, I’ve read his explanation of how to do this and, later, a supposedly better explanation linked by him, which replaced my “Odds form? Likelihood ratios? What? Why do I even need this?” questions with a smaller set of “Can I have a proper complete formula for doing this, please? Why do I even need this?” questions. I then supposedly managed to piece that formula together by myself from the above articles, the relevant Wikipedia article, another one, and a bunch of web searches. The resulting formula indeed looked quite useful for simplifying long computations, if nothing else. So, I tried to compute some examples using that formula, and it produced obviously wrong results.
That discouraged me from pursuing this line of research for a bit, but then I read more of Yudkowsky’s writings and it seemed like such a useful mental tool to have. If only I could make it work!
So, one day I decided to sit down and carefully work out all the relevant probability math from scratch myself. Hopefully, I thought, it would make it obvious where my previous mistakes were. (It did.) Then, later, I married that to David Colquhoun’s writings on Bayesian reasoning over scientific results, and the combined result became one of the most re-visited items on my “personal wiki” and one of my most useful mental tools ever. Which is why I decided to share this.
In the context of probability theory I like to think of random events as being Boolean predicates/conditions over the states of the world and probabilities as being ratios $$P(A) = \frac{\#(A)}{\#(\top)}$$ where $\#$ is a function that returns the number of the states of the world for which its argument evaluates to $\top$ (true) and $\top$ is the predicate that admits all possible world states.
In other words, the probability of an event is the fraction of all possible world states in which that event holds.
Having defined the above, now we can define:
“not $A$”, denoted as $\neg A$, as the predicate that holds exactly on those world states where $A$ does not, so that $\#(\neg A) = \#(\top) - \#(A)$
which, of course, implies $P(\neg A) = 1 - P(A)$
“$A$ and $B$”, denoted as $A \wedge B$ (sometimes, as $A, B$), as the predicate that holds exactly on those world states where both $A$ and $B$ hold
“$A$ or $B$”, denoted as $A \vee B$, as the predicate that holds on those world states where at least one of $A$ and $B$ holds
Also of note is the fact that $$P(A \vee B) = P(A) + P(B) - P(A \wedge B)$$ because the size of the intersection of $A$ and $B$, denoted $\#(A \wedge B)$, gets counted twice in $\#(A) + \#(B)$.
We can define probability of “$A$ assuming $B$”, also known as “conditional probability of $A$ under assumption $B$”, denoted as $P(A|B)$, as $$P(A|B) = \frac{\#(A \wedge B)}{\#(B)}$$ I.e., it’s the probability of $A$ in a smaller world where $B$ is always true.
When I was studying this the first time in university, I remember being confused. Shouldn’t it be defined as follows? $$P(A|B) = \frac{\#(A \wedge B)}{\#(\top)}$$ It’s not! If nothing else, with this definition $P(A|B)$ would still have $\#(\top)$ in its denominator, which would be incorrect.
In other words, $|$ is not an operator, $A|B$ is not an expression, $P(A|B)$ is an atomic definition. The proper way to think about this is to consider $P(\cdot|B)$ to be syntax sugar for a separate probability function defined over the smaller world where $B$ is always true.
Of special note is the fact that the above definition of $P(A|B)$ trivially implies $$P(A \wedge B) = P(A|B) \cdot P(B)$$ We are going to use this observation quite a lot in what follows.
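As a sanity check of these counting-based definitions, here is a tiny Python sketch (the toy world and all names in it are made up) that enumerates world states and verifies the formulas above:

```python
from itertools import product

# A toy world: each state is a pair of booleans (raining, cloudy).
states = list(product([True, False], repeat=2))

def count(pred):
    """The # function from the text: number of world states satisfying a predicate."""
    return sum(1 for s in states if pred(s))

def P(pred):
    """P(A) = #(A) / #(top)."""
    return count(pred) / count(lambda s: True)

def P_given(a, b):
    """P(A|B) = #(A and B) / #(B): probability of A in the smaller world where B holds."""
    return count(lambda s: a(s) and b(s)) / count(b)

raining = lambda s: s[0]
cloudy = lambda s: s[1]
rain_and_clouds = lambda s: raining(s) and cloudy(s)
rain_or_clouds = lambda s: raining(s) or cloudy(s)

# P(A or B) = P(A) + P(B) - P(A and B)
assert P(rain_or_clouds) == P(raining) + P(cloudy) - P(rain_and_clouds)
# P(A and B) = P(A|B) * P(B)
assert P(rain_and_clouds) == P_given(raining, cloudy) * P(cloudy)
print("counting-based definitions check out")
```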
For a set of events $B_i$ for $i \in 1 \dots n$, which are mutually exclusive ($P(B_i \wedge B_j) = 0$ for $i \neq j$) and collectively exhaustive ($P(B_1 \vee \dots \vee B_n) = 1$), the following formula, called the law of total probability, holds $$P(A) = \sum_{i=1}^{n} P(A|B_i) \cdot P(B_i)$$
This can be specialized to the following case. For an event $B$ and its negation $\neg B$, the following holds $$P(A) = P(A|B) \cdot P(B) + P(A|\neg B) \cdot P(\neg B)$$ We are going to use this observation multiple times in what follows.
Note that, since the $\wedge$ operator is associative, $P(A \wedge B) = P(A|B) \cdot P(B)$ can be applied to an arbitrary number of events by taking $A = A_1$ and $B = A_2 \wedge \dots \wedge A_n$, which gives $$P(A_1 \wedge A_2 \wedge \dots \wedge A_n) = P(A_1 | A_2 \wedge \dots \wedge A_n) \cdot P(A_2 \wedge \dots \wedge A_n)$$
That expansion can then be repeated for $P(A_2 \wedge \dots \wedge A_n)$ and other similar sub-expressions. This observation gives us the following formula, called the chain rule $$P(A_1 \wedge \dots \wedge A_n) = P(A_1 | A_2 \wedge \dots \wedge A_n) \cdot P(A_2 | A_3 \wedge \dots \wedge A_n) \cdots P(A_{n-1} | A_n) \cdot P(A_n)$$
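As a quick concrete check of both identities, here is a small Python sketch with a made-up joint distribution over three binary events:

```python
# A made-up joint distribution over three binary events (A1, A2, A3).
# Keys are truth-value triples; values are probabilities summing to 1.
joint = {
    (True,  True,  True):  0.10, (True,  True,  False): 0.15,
    (True,  False, True):  0.05, (True,  False, False): 0.20,
    (False, True,  True):  0.20, (False, True,  False): 0.10,
    (False, False, True):  0.05, (False, False, False): 0.15,
}

def P(pred):
    return sum(p for outcome, p in joint.items() if pred(outcome))

def P_given(a, b):
    return P(lambda o: a(o) and b(o)) / P(b)

A1 = lambda o: o[0]
A2 = lambda o: o[1]
A3 = lambda o: o[2]
not_A2 = lambda o: not o[1]

# Law of total probability: P(A1) = P(A1|A2) * P(A2) + P(A1|not A2) * P(not A2)
total = P_given(A1, A2) * P(A2) + P_given(A1, not_A2) * P(not_A2)
assert abs(P(A1) - total) < 1e-12

# Chain rule: P(A1 & A2 & A3) = P(A1 | A2 & A3) * P(A2 | A3) * P(A3)
A2_and_A3 = lambda o: o[1] and o[2]
chain = P_given(A1, A2_and_A3) * P_given(A2, A3) * P(A3)
assert abs(P(lambda o: o[0] and o[1] and o[2]) - chain) < 1e-12

print("law of total probability and chain rule hold")
```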
Consider the following matrix of outcome frequencies when testing a single hypothesis against its negation using a single one-bit-of-information test procedure
| | Hypothesis is true | Hypothesis is false |
|---|---|---|
| Test is positive | True positives ($\mathit{TP}$) | False positives ($\mathit{FP}$) |
| Test is negative | False negatives ($\mathit{FN}$) | True negatives ($\mathit{TN}$) |
For the original disease testing example from the beginning of the introduction, this matrix would look as follows
| | Sick | Healthy |
|---|---|---|
| Test is positive | 99 | 99 |
| Test is negative | 1 | 9801 |
Most testing procedures specify these matrices by giving the following three probabilities:
hypothesis’ prior probability, which is $P(H)$, i.e., the relative weight of the first column of the matrix with respect to the whole of it;
testing method’s sensitivity, which is the probability of a true positive result assuming the hypothesis is true, i.e., $P(E|H)$ (writing $E$ for the event of the test coming back positive), or the relative weight of the $\mathit{TP}$ cell in relation to the whole of the first column;
testing method’s specificity, which is the probability of a true negative result under the assumption that the hypothesis is false, i.e., $P(\neg E|\neg H)$, or the relative weight of the $\mathit{TN}$ cell in relation to the whole of the second column.
Given these three values and the number of trials $N$ we can compute the contents of each cell of the above matrix by simple multiplications.
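For instance, a small Python sketch of those multiplications (the helper name is made up) reproduces the table above:

```python
def confusion_matrix(prior, sensitivity, specificity, trials):
    """Expected cell counts of the outcome-frequency matrix."""
    sick = trials * prior
    healthy = trials * (1 - prior)
    return {
        "TP": sick * sensitivity,           # sick, test positive
        "FN": sick * (1 - sensitivity),     # sick, test negative
        "TN": healthy * specificity,        # healthy, test negative
        "FP": healthy * (1 - specificity),  # healthy, test positive
    }

# The disease-testing example: 1% prior, 99%/99% test, 10000 people.
print(confusion_matrix(0.01, 0.99, 0.99, 10000))
# -> TP ≈ 99, FN ≈ 1, TN ≈ 9801, FP ≈ 99
```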
The derivation for the classical form of Bayes’ theorem can now be done as follows. By definition $$P(A \wedge B) = P(A|B) \cdot P(B) \qquad \text{and} \qquad P(B \wedge A) = P(B|A) \cdot P(A)$$ therefore, taking into account that the $\wedge$ operator is commutative, $$P(A|B) \cdot P(B) = P(A \wedge B) = P(B \wedge A) = P(B|A) \cdot P(A)$$ Throwing away the $P(A \wedge B)$ part and dividing both leftover parts by $P(B)$ gives us the following formula, called Bayes’ theorem $$P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)}$$ That’s it.
In practice, it’s useful to treat the $A$ and $B$ in the above as a hypothesis $H$ and its evidence $E$ $$P(H|E) = \frac{P(E|H) \cdot P(H)}{P(E)}$$ which, with its denominator expanded using the law of total probability, becomes $$P(H|E) = \frac{P(E|H) \cdot P(H)}{P(E|H) \cdot P(H) + P(E|\neg H) \cdot P(\neg H)}$$ which is just $$P(H|E) = \frac{\mathit{sensitivity} \cdot \mathit{prior}}{\mathit{sensitivity} \cdot \mathit{prior} + (1 - \mathit{specificity}) \cdot (1 - \mathit{prior})}$$
This gives the classical way to compute the answer to the first example above $$P(\mathit{sick}|\mathit{positive}) = \frac{0.99 \cdot 0.01}{0.99 \cdot 0.01 + (1 - 0.99) \cdot (1 - 0.01)} = \frac{0.0099}{0.0099 + 0.0099} = 50\%$$
Note, however, that we can multiply both parts of the above fraction by the number of trials $N$, which then becomes $$P(H|E) = \frac{\mathit{TP}}{\mathit{TP} + \mathit{FP}}$$ which is the formula I used in the examples at the beginning of the introduction.
The TP-FP formula is most useful for quick mental ballpark calculations for a single test. The prior-sensitivity-specificity formula is most helpful for automated calculations.
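As a quick illustration (with made-up helper names), both forms give the same answer on the running example:

```python
def posterior_prob_form(prior, sensitivity, specificity):
    """Prior/sensitivity/specificity form of Bayes' theorem."""
    num = sensitivity * prior
    return num / (num + (1 - specificity) * (1 - prior))

def posterior_count_form(tp, fp):
    """TP/(TP+FP) form: the same fraction scaled by the number of trials."""
    return tp / (tp + fp)

print(posterior_prob_form(0.01, 0.99, 0.99))  # ≈ 0.5
print(posterior_count_form(99, 99))           # 0.5
```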
But what we actually want to derive is a formula for a single hypothesis with multiple pieces of evidence $$P(H | E_1 \wedge \dots \wedge E_n)$$ where, for brevity, the hypothesis was renamed to simply $H$ and its evidence became a set of $E_i$ for $i \in 1 \dots n$, where $n$ is the number of tests.
So, let’s derive that now.
The classical formula for Bayes’ theorem with multiple tests can be derived similarly to the previous version by applying the chain rule to $P(H \wedge E_1 \wedge \dots \wedge E_n)$ as follows $$P(H \wedge E_1 \wedge \dots \wedge E_n) = P(H | E_1 \wedge \dots \wedge E_n) \cdot P(E_1 \wedge \dots \wedge E_n)$$ Then, by shuffling elements under $\wedge$, since that operator is both commutative and associative, and doing it again we get $$P(H \wedge E_1 \wedge \dots \wedge E_n) = P(E_1 \wedge \dots \wedge E_n \wedge H) = P(E_1 \wedge \dots \wedge E_n | H) \cdot P(H)$$ Combining both the same way we did before, we get $$P(H | E_1 \wedge \dots \wedge E_n) = \frac{P(E_1 \wedge \dots \wedge E_n | H) \cdot P(H)}{P(E_1 \wedge \dots \wedge E_n)}$$
When all tests are independent from each other then, by definition of independence, this implies $$P(E_1 \wedge \dots \wedge E_n | H) = \prod_{i=1}^{n} P(E_i | H)$$ which simplifies the above formula to $$P(H | E_1 \wedge \dots \wedge E_n) = \frac{P(H) \cdot \prod_{i=1}^{n} P(E_i | H)}{P(E_1 \wedge \dots \wedge E_n)}$$
This, by the way, is almost the exact same formula the Naive Bayes classifier uses, except Naive Bayes ignores the constant $P(E_1 \wedge \dots \wedge E_n)$ denominator for simplicity, since it’s a classifier and the actual probability value is unimportant there.
Note, however, that the law of total probability turns the above formula into $$P(H | E_1 \wedge \dots \wedge E_n) = \frac{P(H) \cdot \prod_{i=1}^{n} P(E_i | H)}{P(E_1 \wedge \dots \wedge E_n | H) \cdot P(H) + P(E_1 \wedge \dots \wedge E_n | \neg H) \cdot P(\neg H)}$$ which is $$P(H | E_1 \wedge \dots \wedge E_n) = \frac{P(H) \cdot \prod_{i=1}^{n} P(E_i | H)}{P(H) \cdot \prod_{i=1}^{n} P(E_i | H) + P(\neg H) \cdot \prod_{i=1}^{n} P(E_i | \neg H)}$$
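As a sanity check, here is a minimal Python sketch of this full formula, assuming independent tests each described by its own sensitivity and specificity (the function name and the example numbers are illustrative):

```python
def posterior_multi(prior, tests):
    """P(H | E1 ∧ ... ∧ En) via the law-of-total-probability denominator.

    `tests` is a list of (sensitivity, specificity, came_back_positive) triples.
    """
    p_evidence_given_h = 1.0      # ∏ P(Ei | H)
    p_evidence_given_not_h = 1.0  # ∏ P(Ei | ¬H)
    for sensitivity, specificity, positive in tests:
        if positive:
            p_evidence_given_h *= sensitivity
            p_evidence_given_not_h *= 1 - specificity
        else:
            p_evidence_given_h *= 1 - sensitivity
            p_evidence_given_not_h *= specificity
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# Two independent positive "99% accurate" tests with a 1% prior.
print(posterior_multi(0.01, [(0.99, 0.99, True), (0.99, 0.99, True)]))  # ≈ 0.99
```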
This formula works, but it’s a bit annoying to use: each time you make a new test, you have to remember to use the original $P(H)$ in the denominator or recompute the whole thing from scratch.
Or, to say it another way, trying to compute the above by applying the single-evidence Bayes’ theorem $n$ times by doing $$P_{i}(H) = \frac{P(E_i|H) \cdot P_{i-1}(H)}{P(E_i)} \qquad \text{with} \qquad P_0(H) = P(H)$$ and then taking $P_n(H)$ as an answer produces an incorrect result, because each $P(E_i|H)$ and $P(E_i)$ were given for the original $P(H)$, not the updated $P_{i-1}(H)$ values.
Luckily, there’s a slightly more general and yet simpler way to do this.
Consider a ratio of conditional probabilities of two hypotheses $H_1$ and $H_2$ under the same evidence $$\frac{P(H_1|E)}{P(H_2|E)} = \frac{P(E|H_1) \cdot P(H_1)}{P(E|H_2) \cdot P(H_2)}$$ or, similarly, for multiple pieces of evidence $$\frac{P(H_1|E_1 \wedge \dots \wedge E_n)}{P(H_2|E_1 \wedge \dots \wedge E_n)} = \frac{P(E_1 \wedge \dots \wedge E_n|H_1) \cdot P(H_1)}{P(E_1 \wedge \dots \wedge E_n|H_2) \cdot P(H_2)}$$ which, when all tests are independent, gives us $$\frac{P(H_1|E_1 \wedge \dots \wedge E_n)}{P(H_2|E_1 \wedge \dots \wedge E_n)} = \frac{P(H_1)}{P(H_2)} \cdot \prod_{i=1}^{n} \frac{P(E_i|H_1)}{P(E_i|H_2)}$$
Note that this formula no longer mentions any $P(E_i)$ or $P(E_1 \wedge \dots \wedge E_n)$ values, so the law of total probability is no longer needed here.
Now, let’s define $$O(H_1 : H_2 | E) = \frac{P(H_1|E)}{P(H_2|E)} \qquad \text{and} \qquad O(H_1 : H_2) = \frac{P(H_1)}{P(H_2)}$$
These represent relative odds of $H_1$ and $H_2$ assuming $E$ and with no assumptions, respectively.
With this we can rewrite the above naive-bayes-ratio formula as $$O(H_1 : H_2 | E_1 \wedge \dots \wedge E_n) = O(H_1 : H_2) \cdot \prod_{i=1}^{n} \frac{P(E_i|H_1)}{P(E_i|H_2)}$$
In the literature, it’s common to also define $$\mathit{LR}(E_i) = \frac{P(E_i|H_1)}{P(E_i|H_2)}$$ which represents the likelihood ratio of observing $E_i$ under two different hypotheses $H_1$ and $H_2$.
This allows us to rewrite the above formula as $$O(H_1 : H_2 | E_1 \wedge \dots \wedge E_n) = O(H_1 : H_2) \cdot \prod_{i=1}^{n} \mathit{LR}(E_i)$$
This formula is what Bayesian reasoning is all about: you take a ratio of prior probabilities of two hypotheses you want to compare and then repeatedly update your knowledge by multiplying your current odds by likelihood ratios for each piece of evidence.
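In code, this whole update procedure is just one running product. A minimal Python sketch, with illustrative names:

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """O(H1:H2 | E1 ∧ ... ∧ En) = O(H1:H2) · ∏ LR(Ei)."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

# Prior odds of 1:99 updated by two pieces of evidence, each with a likelihood ratio of 99.
print(posterior_odds(1 / 99, [99, 99]))  # 99.0, i.e., posterior odds of 99:1
```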
We can still use the above formulas to compute $P(H | E_1 \wedge \dots \wedge E_n)$ for a single hypothesis $H$ as follows.
Firstly, note that the above definitions of $O$ imply that $$O(H : \neg H | E_1 \wedge \dots \wedge E_n) = \frac{P(H | E_1 \wedge \dots \wedge E_n)}{P(\neg H | E_1 \wedge \dots \wedge E_n)} = \frac{P(H | E_1 \wedge \dots \wedge E_n)}{1 - P(H | E_1 \wedge \dots \wedge E_n)}$$
Secondly, note that if $o = \frac{p}{1 - p}$ then $o \cdot (1 - p) = p$ and thus $p = \frac{o}{1 + o}$.
Combining these gives us the way to convert odds to probabilities $$P(H | E_1 \wedge \dots \wedge E_n) = \frac{O(H : \neg H | E_1 \wedge \dots \wedge E_n)}{1 + O(H : \neg H | E_1 \wedge \dots \wedge E_n)}$$
Thus, if we substitute $H_1 = H$ and $H_2 = \neg H$ into the above naive-bayes-odds formula, it becomes $$O(H : \neg H | E_1 \wedge \dots \wedge E_n) = O(H : \neg H) \cdot \prod_{i=1}^{n} \frac{P(E_i|H)}{P(E_i|\neg H)}$$ and we can compute $P(H | E_1 \wedge \dots \wedge E_n)$ from it using the odds-to-probability conversion above.
So, for instance, for the original disease testing example from the beginning of the introduction $$O(H : \neg H) = \frac{0.01}{0.99} \qquad \frac{P(E|H)}{P(E|\neg H)} = \frac{0.99}{0.01} = 99$$ thus $$O(H : \neg H | E) = \frac{0.01}{0.99} \cdot \frac{0.99}{0.01} = 1 \qquad P(H|E) = \frac{1}{1 + 1} = 50\%$$
To demonstrate the real usefulness of this, let’s say you did three independent HIV tests at different labs, they came back positive, then negative, then positive again, and you want to know your probability of being sick. Let’s say the first test had relatively modest sensitivity and specificity because it was a cheap self-administered test, the second test had better ones and was performed by the closest available lab, and the third test had better ones still and was performed by sending your blood to a well-known lab.
With that we can compute the posterior odds by multiplying the prior odds $\frac{P(H)}{P(\neg H)}$ by the three likelihood ratios, one per test. Note how for the second test we multiply by $\frac{P(\neg E_2|H)}{P(\neg E_2|\neg H)} = \frac{1 - \mathit{sensitivity}_2}{\mathit{specificity}_2}$ since it came back negative.
Therefore, after converting the resulting odds back into a probability, the combined probability of actually being sick comes out rather high.
This seems like a lot to you, so you go and take another very expensive test, which claims even higher sensitivity and specificity, and it comes back negative. What is the probability of being sick now?
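For concreteness, here is a Python sketch of this whole procedure with stand-in sensitivities and specificities of my own choosing (the function names and all the numbers below are illustrative only; substitute the real values of your tests):

```python
def likelihood_ratio(sensitivity, specificity, positive):
    """P(Ei|H) / P(Ei|¬H) for a positive or negative result of a given test."""
    if positive:
        return sensitivity / (1 - specificity)
    return (1 - sensitivity) / specificity

def combine(prior, tests):
    """Posterior probability of H: prior odds times the product of likelihood ratios.

    `tests` is a list of (sensitivity, specificity, came_back_positive) triples.
    """
    odds = prior / (1 - prior)
    for sensitivity, specificity, positive in tests:
        odds *= likelihood_ratio(sensitivity, specificity, positive)
    return odds / (1 + odds)

# Illustrative stand-in numbers only: 0.5% prior, then
# a cheap self-test, a local lab, a well-known lab, and a final expensive test.
tests = [
    (0.95, 0.95, True),       # came back positive
    (0.99, 0.99, False),      # came back negative
    (0.999, 0.999, True),     # came back positive
    (0.9999, 0.9999, False),  # came back negative
]
print(combine(0.005, tests))  # ≈ 1e-4 with these made-up numbers
```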
After some practice this actually becomes pretty easy to do in your head, giving you a nice mental tool for quickly approximating ballpark odds of something under multiple pieces of evidence.
This section is a work in progress.
See Bayesian networks.