Bayes' Theorem Calculator

Enter prior probability, sensitivity (true positive rate), and false positive rate to calculate the posterior probability — how likely a positive result actually reflects the true condition, with complete step-by-step Bayesian calculation.


Tips & Notes

  • The prior probability (base rate) is the most important input and the most commonly ignored. A 99% accurate test for a disease affecting 1 in 10,000 people still produces mostly false positives — prevalence dominates.
  • Specificity = 1 − False Positive Rate. A test with 5% false positive rate has 95% specificity. High specificity (low false positive rate) is what makes a positive test trustworthy for rare conditions.
  • Bayes' theorem is iterative. After a positive test, use the posterior as the new prior for a second independent test. Two positive tests yield a much higher posterior than one, even if each test has the same accuracy.
  • Sensitivity and specificity trade off — improving one often worsens the other. A more sensitive test catches more true cases (fewer false negatives) but typically has more false positives. Context determines which matters more.
  • P(A|B) ≠ P(B|A). "P(positive test | disease) = 95%" is not the same as "P(disease | positive test) = 95%". This confusion (called the prosecutor's fallacy) is responsible for many misinterpretations in law and medicine.

Common Mistakes

  • Ignoring the base rate (prior probability). The most common Bayesian error is assuming a 95% accurate test means a positive result is 95% likely to be correct. For rare conditions, a positive result may still be more likely a false positive than a true positive.
  • Confusing sensitivity with posterior probability. Sensitivity P(+|disease)=0.95 is the probability of a positive test given you have the disease — not the probability of having the disease given a positive test. These are completely different quantities.
  • Using sensitivity as the false positive rate. The false positive rate is P(+|no disease) — independent of sensitivity. A test can be both highly sensitive (few missed true cases) and have many false positives (low specificity) simultaneously.
  • Applying Bayes' theorem when tests are not independent. Two positive results from the same test type on the same patient may not be independent — if the first was a false positive, the second may be more likely to be one too.
  • Treating posterior probability as certainty. P(disease|positive) = 0.80 means there is an 80% chance of disease — still a 20% chance of being disease-free. Posterior probability guides decisions under uncertainty, not absolute conclusions.

Bayes' Theorem Calculator Overview

Bayes' theorem is the mathematical formula for updating probabilities when new evidence arrives. It answers questions like: a medical test is 95% accurate and comes back positive — what is the actual probability you have the disease? The answer is almost always far lower than 95%, because the test's accuracy must be weighed against how common (or rare) the disease is. Bayes' theorem formalizes exactly this reasoning.

Bayes' theorem:

P(A|B) = P(B|A) × P(A) / P(B)
Expanded form using total probability:
P(A|B) = [P(B|A) × P(A)] / [P(B|A) × P(A) + P(B|¬A) × P(¬A)]
EX: Disease prevalence P(disease) = 1% (prior). Test sensitivity P(+|disease) = 95%. False positive rate P(+|no disease) = 5%. → P(disease|+) = (0.95×0.01) / (0.95×0.01 + 0.05×0.99) = 0.0095/(0.0095+0.0495) = 0.0095/0.059 = 0.161 → only 16.1% chance of actually having disease despite positive test
Why the result surprises most people — base rate neglect:
EX: Same test, 95% accurate. Disease prevalence = 10% (more common). → P(disease|+) = (0.95×0.10)/(0.95×0.10+0.05×0.90) = 0.095/0.140 = 0.679 → 67.9% — prevalence dramatically changes the interpretation
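Both worked examples can be checked in a few lines of Python. This is a minimal sketch of the expanded formula; `bayes_posterior` is an illustrative helper name, not part of any library:

```python
def bayes_posterior(prior, sensitivity, false_positive_rate):
    """Expanded form: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|¬A)P(¬A)]."""
    true_pos = sensitivity * prior                    # P(B|A) × P(A)
    false_pos = false_positive_rate * (1 - prior)     # P(B|¬A) × P(¬A)
    return true_pos / (true_pos + false_pos)

# 1% prevalence: a positive result is only ~16% likely to be real
print(round(bayes_posterior(0.01, 0.95, 0.05), 3))  # → 0.161
# 10% prevalence: the same test is now ~68% reliable
print(round(bayes_posterior(0.10, 0.95, 0.05), 3))  # → 0.679
```

Only the prior changes between the two calls — the test itself is identical — yet the posterior moves from 16.1% to 67.9%.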
Bayesian terms explained:
  • Prior probability, P(A): probability before new evidence. Example: disease prevalence in the population, 1%.
  • Likelihood (sensitivity), P(B|A): probability of a positive test given the disease. Example: true positive rate, 95%.
  • False positive rate, P(B|¬A): probability of a positive test given no disease. Example: false alarm rate, 5%.
  • Posterior probability, P(A|B): probability of the disease given a positive test. Example: actual probability, 16.1%.
Bayes' theorem is sequential: the posterior from one calculation becomes the prior for the next when additional evidence arrives. Two independent positive tests for a 1% prevalence disease: after first positive, posterior=16.1%. Use 16.1% as the new prior for the second test → P(disease|two positives) ≈ 78.5%. Each positive test dramatically increases confidence.
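The sequential update described above can be sketched as a loop, feeding each posterior back in as the next prior (`bayes_posterior` is an illustrative helper, and the tests are assumed independent):

```python
def bayes_posterior(prior, sensitivity, fpr):
    numerator = sensitivity * prior
    return numerator / (numerator + fpr * (1 - prior))

# Each posterior becomes the prior for the next independent test.
prob = 0.01
for n in (1, 2):
    prob = bayes_posterior(prob, 0.95, 0.05)
    print(f"after positive test {n}: {prob:.3f}")
# after positive test 1: 0.161
# after positive test 2: 0.785
```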

Frequently Asked Questions

What does Bayes' theorem actually do?

Bayes' theorem updates your probability estimate when new evidence arrives. Before a test (prior): 1% chance of disease. The test comes back positive (evidence). Bayes' theorem calculates the probability of disease now, given this positive test. The answer accounts for the test's accuracy and how rare the disease is. High test accuracy + rare disease = lower posterior than you might expect.

Why is the posterior so low even when the test is accurate?

Because false positives dominate when prevalence is low. For 1% prevalence, 99 of every 100 people do not have the disease. A 5% false positive rate means about 4.95 of those 99 test positive. Only 1 in 100 has the disease, and 95% of those test positive — that is 0.95 true positives. So out of 5.9 total positives, only 0.95 are real: 0.95/5.9 = 16.1% posterior probability.
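The same counting argument, scaled to a population of 10,000, can be written out directly — a sketch with illustrative variable names:

```python
population = 10_000
prevalence, sensitivity, fpr = 0.01, 0.95, 0.05

sick = population * prevalence               # 100 people have the disease
healthy = population - sick                  # 9,900 do not
true_positives = sick * sensitivity          # ~95 detected cases
false_positives = healthy * fpr              # ~495 false alarms
posterior = true_positives / (true_positives + false_positives)
print(round(posterior, 3))  # → 0.161
```

Counting people rather than multiplying probabilities gives the identical answer and makes the dominance of false positives visible: 495 false alarms versus 95 real detections.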

What is the difference between sensitivity and specificity?

Sensitivity = P(positive test | has condition) — how well the test detects true cases. High sensitivity means few missed cases (low false negative rate). Specificity = P(negative test | no condition) = 1 − False Positive Rate. High specificity means few false alarms. A perfect test has both at 100%. In practice they trade off: lowering the detection threshold increases sensitivity but decreases specificity.

How do I combine multiple pieces of evidence?

Apply it sequentially. First positive test: prior = 0.01 → posterior₁ = 0.161. Use posterior₁ as the new prior for the second piece of evidence. Second positive test: prior = 0.161, same sensitivity = 0.95, same FPR = 0.05 → posterior₂ = (0.95×0.161)/(0.95×0.161+0.05×0.839) = 0.153/0.195 = 0.785 (78.5%). Two independent positive tests update from 1% to 78.5% — a dramatic increase. Each piece of independent evidence compounds the update.

What is the false discovery rate?

False discovery rate (FDR) = proportion of positive test results that are actually false positives = 1 − posterior probability. If P(disease|positive) = 0.161, then FDR = 0.839 — 83.9% of positive tests in this scenario are false positives, even with a 95% accurate test applied to a 1% prevalence disease. FDR is crucial in population screening: roughly 84 of every 100 positive results need follow-up confirmation before any clinical action.
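The FDR for this scenario follows directly from the same quantities used in the posterior calculation; `false_discovery_rate` is an illustrative helper name:

```python
def false_discovery_rate(prior, sensitivity, fpr):
    true_pos = sensitivity * prior
    false_pos = fpr * (1 - prior)
    return false_pos / (true_pos + false_pos)   # = 1 − posterior

print(round(false_discovery_rate(0.01, 0.95, 0.05), 3))  # → 0.839
```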

What is the prosecutor's fallacy?

Confusing P(evidence|innocent) with P(innocent|evidence). Example: 'the DNA matches 1 in 1 million people' (P(match|innocent) = 0.000001) is incorrectly taken to mean 'the probability of innocence is 0.000001'. But with 8 billion people, about 8,000 would match by chance. If the defendant was identified solely by the DNA match (no other evidence), then P(innocent|match) = 7,999/8,000 ≈ 99.99%. The prosecutor's fallacy ignores the enormous pool of potential matches; Bayes' theorem correctly separates the two conditional probabilities.
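The 99.99% figure can be reproduced numerically. This sketch assumes the match pool really is the whole population and that the suspect was found only through the match — both simplifying assumptions:

```python
population = 8_000_000_000
match_rate = 1e-6                  # P(match | innocent), "1 in a million"

expected_matches = population * match_rate     # ≈ 8,000 chance matches
# If only the match identified the suspect, any of the matches could be the source:
p_innocent_given_match = (expected_matches - 1) / expected_matches
print(f"{p_innocent_given_match:.2%}")  # → 99.99%
```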