Bayesian Statistics
Bayesian statistics is a way of reasoning about uncertainty. Instead of asking “Is there a significant difference?”, it asks:
“Given the data I’ve seen, what is the probability that one variant is better than the other?”
This makes Bayesian methods especially intuitive for A/B testing.
How Bayesian estimation works
Bayesian inference combines two things:
- Prior belief: what we believe about the conversion rate before seeing data. In A/B testing you usually use a neutral prior (Beta(1,1)), meaning: "We don't assume anything yet."
- Observed data: the conversions and views from each variant.
These are combined to form the posterior distribution: a probability distribution that answers, "What conversion rates are most likely given the data?"
This posterior is what we actually analyze.
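As a minimal sketch of that update (the counts and the use of SciPy are illustrative assumptions, not from the original), a neutral Beta(1,1) prior combined with observed data gives the posterior directly:

```python
from scipy import stats

# Illustrative numbers, not from the original text.
views, conversions = 500, 24

# Neutral prior: Beta(1, 1) treats every conversion rate in [0, 1] as equally likely.
prior = stats.beta(1, 1)

# Conjugate update: successes are added to the first parameter, failures to the second.
posterior = stats.beta(1 + conversions, 1 + (views - conversions))

print(f"Prior mean:     {prior.mean():.3f}")      # 0.500 (no assumption yet)
print(f"Posterior mean: {posterior.mean():.3f}")  # pulled toward the observed rate of 0.048
```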
Why it's useful in A/B testing
Bayesian A/B testing gives you:
A direct probability
Example:
“Variant B has an 87% probability of beating A.”
This is something businesses can act on.
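Under made-up counts (and the hypothetical choice of NumPy/SciPy), that probability falls straight out of posterior samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative data: (conversions, views) per variant.
a_conv, a_views = 40, 1000
b_conv, b_views = 55, 1000

# Beta(1,1) prior updated with each variant's data, then sampled.
a = stats.beta(1 + a_conv, 1 + a_views - a_conv).rvs(100_000, random_state=rng)
b = stats.beta(1 + b_conv, 1 + b_views - b_conv).rvs(100_000, random_state=rng)

# Fraction of draws where B's conversion rate exceeds A's.
print(f"P(B beats A): {(b > a).mean():.0%}")
```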
Credible intervals
These tell you the range of conversion rates or lift that are actually plausible, based on the data.
Robustness with small samples
Bayesian results stay interpretable even with small samples: the posterior simply remains wide, reflecting the uncertainty, whereas many frequentist approaches rely on large-sample approximations.
The Beta distribution
For binary outcomes (convert/not convert), the Bayesian model uses the Beta distribution, because:
- It naturally models proportions (like conversion rates)
- It updates cleanly with new data
- It stays within 0–1
When you feed in:
- conversions → successes
- views - conversions → failures
You get a smooth probability distribution for the true conversion rate.
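A minimal sketch of that mapping, assuming SciPy and made-up counts:

```python
from scipy import stats

# Illustrative data for one variant.
views, conversions = 1000, 48

# successes = conversions, failures = views - conversions,
# each added to the neutral Beta(1, 1) prior.
posterior = stats.beta(1 + conversions, 1 + (views - conversions))

# A central 95% interval for the true conversion rate.
low, high = posterior.ppf([0.025, 0.975])
print(f"Plausible true conversion rate: {low:.3f} to {high:.3f}")
```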
Monte Carlo sampling
To compare variants, we draw many random samples (e.g., 100,000) from each variant’s posterior distribution.
Here’s what happens:
- Draw thousands of samples of the conversion rate for A
- Draw thousands for B
- Compute the lift on every draw
- Sort all those lift values
- Pick the middle chunk (for 95%, it's the middle 95% of them)
That middle range is your credible interval.
No complicated formulas — it’s literally based on the simulated outcomes.
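A sketch of those steps, assuming NumPy/SciPy and illustrative per-variant counts (the 100,000 draws match the example figure above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_draws = 100_000

# Illustrative data: (conversions, views) per variant.
a_conv, a_views = 40, 1000
b_conv, b_views = 55, 1000

# Draw samples of the conversion rate from each variant's posterior.
a = stats.beta(1 + a_conv, 1 + a_views - a_conv).rvs(n_draws, random_state=rng)
b = stats.beta(1 + b_conv, 1 + b_views - b_conv).rvs(n_draws, random_state=rng)

# Compute the relative lift on every draw.
lift = (b - a) / a

# Keep the middle 95% of the sorted lift values: the credible interval.
low, high = np.percentile(lift, [2.5, 97.5])

print(f"P(B beats A): {(b > a).mean():.1%}")
print(f"95% credible interval for relative lift: [{low:.1%}, {high:.1%}]")
```

Here np.percentile does the sort-and-take-the-middle step in one call.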
From probability to decisions
Bayesian statistics tells you how likely one variant is to be better than another.
What it does not tell you is:
- Whether the effect is real or just noise
- Whether the effect is meaningful for the business
- Whether the result is stable enough to act on
Turning Bayesian probabilities into safe product decisions requires additional rules around sample size, runtime, practical significance, and guardrails.
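As one hypothetical way to encode such rules, layered on top of the quantities computed above (every threshold below is a placeholder, not a recommendation from the original):

```python
def ready_to_ship(prob_b_beats_a, lift_low, views_per_variant, days_running,
                  min_prob=0.95, min_practical_lift=0.02,
                  min_views=5_000, min_days=14):
    """Hypothetical decision rule layered on top of the Bayesian output.

    All thresholds are illustrative placeholders.
    """
    return (
        prob_b_beats_a >= min_prob             # high probability B really is better
        and lift_low >= min_practical_lift     # interval clears a practically meaningful lift
        and views_per_variant >= min_views     # enough traffic collected
        and days_running >= min_days           # enough runtime to cover business cycles
    )
```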
To learn how to evaluate Bayesian A/B test results correctly and avoid false positives, see
How to evaluate A/B test results correctly.