30 March 2005

What Is Bayes' Theorem and Why Does It Matter?

Let's find out if Joe uses drugs. We administer a drug test for heroin use, and the result comes back positive.

Given this result, how sure are we that Joe uses heroin?
  • Good enough for me. Case closed!
  • How good is that test?
  • Whoa! Let's apply Bayes' Theorem!
Say what? What the heck is Bayes' Theorem, and why should we (or Joe) care?

Let's say we already knew that:
  • 3% of the general population are heroin users. So, before any test, Joe would have a 3% chance of being a user. This is called the baseline or "prior" probability.
  • This particular test correctly identifies heroin users 95% of the time (5% of the time it would come back negative even though the sample was from a user). This is the test's "sensitivity".
  • Using this test, non-users are correctly identified 90% of the time (10% of the time a non-user tests positive). This is the test's "specificity".
Does this information, which we knew before we tested Joe, affect our determination about Joe's use or non-use of heroin? You bet it does!

[Portrait: Thomas Bayes]
[Portrait: Pierre-Simon Laplace]

Fortunately, London Nonconformist minister Thomas Bayes devised a way to sort this out, back in the 1700s. His insight was rediscovered later in that century by the French mathematician Laplace, and it has developed over the years as "Bayes' Theorem". It is used to estimate probabilities, given knowledge of certain related probabilities.

Joe's Case

In Joe's case, the relevant formula, applied to the facts above, tells us that there is just a 22.7% chance that Joe uses heroin, even though he tested positive. Based on this result, we can conclude that it is more likely that Joe is a user than that the average person is a user, but it is still much more likely that the test is wrong than that Joe uses heroin. The test provides incremental evidence that Joe uses heroin, but the total evidence we have so far still is far from supporting that conclusion. (This example was adapted from a site at Stanford.)
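To see where that 22.7% comes from, here is a quick Python sketch of the arithmetic, counting an imaginary group of 10,000 people (the counts and variable names are my own illustration, not part of the Stanford example):

    # Illustrative sketch of the drug-test calculation described above.
    users = 10_000 * 0.03               # 300 heroin users (the 3% prior)
    non_users = 10_000 - users          # 9,700 non-users
    true_positives = users * 0.95       # 285 users test positive (95% sensitivity)
    false_positives = non_users * 0.10  # 970 non-users test positive (10% false-positive rate)
    # Of everyone who tests positive, what fraction actually uses heroin?
    print(true_positives / (true_positives + false_positives))  # ~0.227, i.e. about 22.7%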

Bayes' Theorem

Bayes' Theorem is an important tool in understanding what we really know, given the evidence and other information we have. It helps incorporate "conditional probabilities" into our conclusions.

There are several mathematical formulas related to Bayes' Theorem, but they generally boil down to this:

conditional probability = prior probability × predictive power of evidence

Or,

What we know, given the evidence = what we knew even without the evidence, adjusted by how good that evidence is.

Bayes' Theorem tells us quantitatively how to update our prior information, given new evidence.
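In conventional notation this reads P(H | E) = P(H) × P(E | H) / P(E), where the "predictive power of the evidence" is the ratio P(E | H) / P(E). Here is a minimal Python sketch of that update rule (the function name and framing are my own, not taken from the original post):

    # Illustrative sketch: posterior = prior * (P(evidence | hypothesis) / P(evidence))
    def bayes_update(prior, likelihood, prob_evidence):
        """Return P(hypothesis | evidence) from the prior, P(E|H), and P(E)."""
        predictive_power = likelihood / prob_evidence
        return prior * predictive_power

    # Joe's drug test: P(positive) = 0.03*0.95 + 0.97*0.10 = 0.1255
    print(bayes_update(prior=0.03, likelihood=0.95, prob_evidence=0.1255))  # ~0.227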

Another Example: Breast Cancer

Say we know, from lots of studies, that 1% of women at age forty have breast cancer. (This means that 99% of women of that age do not have breast cancer.) This is the "prior probability". Mammograms can give us additional information (the "evidence"). Say we also know (about the quality of the evidence) that 80% of the women who do have breast cancer will get a positive mammogram (the test's "sensitivity"), and 9.6% of the women who do not have breast cancer will also get a positive mammogram (the test's false-positive rate; its "specificity" is 90.4%).

If a woman in this age group gets a positive mammogram, how likely is it that she actually has breast cancer? Most doctors, given this statement of the problem, get it wrong; only about 15% answer it correctly. (Doctors, like lawyers, are shockingly ignorant of Bayes' Theorem and its implications.)

The answer is that if a woman gets a positive result on her mammogram, there is a 7.8% chance that she actually has breast cancer. Of all the women who get positive results, only 7.8% actually have breast cancer.

This is calculated as follows:
  • Out of 10,000 women, 100 have breast cancer (the 1% "prior probability").
  • Of those 100, 80 will get positive mammograms (the test's 80% sensitivity).
  • The other 9,900 women do not have breast cancer, and of those 9,900, about 950 will also get positive mammograms (the 9.6% false-positive rate).
  • So the total number of women with positive mammograms is 950 + 80, or 1,030, and of those 1,030 women, only 80 have cancer.
  • Expressed as a proportion, that is 80/1,030, or 0.07767, or about 7.8%.
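Plugging the mammogram numbers into the bayes_update sketch from earlier gives the same answer:

    # P(positive mammogram) = 0.01*0.80 + 0.99*0.096 = 0.10304
    print(bayes_update(prior=0.01, likelihood=0.80, prob_evidence=0.10304))  # ~0.078, i.e. 7.8%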
So what good was it to have a mammogram? A positive mammogram shifts a woman's probability of having cancer from the 1% prior to 7.8%. Only a small fraction of women who don't have cancer will get positive mammograms (9.6%). Obviously further tests will be called for, but the original population of 10,000 women has been narrowed down to the 1,030 with positive tests, a reduction of almost 90%. And the proportion of that smaller group who do have cancer has shifted from 1% (the prior probability) to 7.8% (the conditional probability): if you are in this smaller group, your probability of cancer has increased by almost 8 times. That's useful information for you and your doctor, though not conclusive.

So use of Bayes' Theorem helps us know what we know. The results are often counterintuitive. That is why Bayes' Theorem is so important.

For a revealing illustration of how different Bayesian thinking can be from "common sense", and the impact that difference might have in the real world, look at the Case of the Careless Cab. There's a link there to bring you back here when you are done.

Further Resources

A lot of this discussion was lifted from this informative and fun site.

You might find this post on visualizing Bayes' theorem helpful.

Wikipedia also has a good article.

The portraits are from Wikimedia Commons at https://secure.wikimedia.org/wikipedia/commons/wiki/File:Thomas_Bayes.gif and https://secure.wikimedia.org/wikipedia/commons/wiki/File:Pierre-Simon_Laplace.jpg

