
Statistical Significance

How confident you can be that your A/B test result isn't just random noise.

Statistical significance measures how unlikely your A/B test result would be if the change had no real effect. It's expressed as a confidence level (95% is standard) or a p-value: p < 0.05 means that, if there were no true difference between variants, a result this extreme would show up less than 5% of the time.
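
The definition above can be made concrete with a standard two-proportion z-test, a common way to compute a p-value for an A/B test on conversion rates. The function name and traffic numbers below are illustrative, not from any particular tool:

```python
import math

def ab_test_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided p-value for a two-proportion z-test with a pooled rate."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the assumption of no real difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical test: 5.0% vs 6.0% conversion on 10,000 visitors per variant
p = ab_test_p_value(500, 10_000, 600, 10_000)
print(f"p = {p:.4f}")  # well below 0.05, so significant at 95% confidence
```

A p-value under 0.05 here clears the 95% bar; the same 1-point lift on only 1,000 visitors per variant would not.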

A "significant" result at 95% still carries a 5% false-positive rate: run 20 tests where no variant has any real effect, and on average one will hit "significance" by pure chance. This is why preregistering hypotheses and committing to sample sizes in advance matters.
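
The "1 in 20" arithmetic above can be checked directly. This minimal sketch assumes the tests are independent, each run at a 5% significance level:

```python
alpha = 0.05     # 5% false-positive rate per individual test
num_tests = 20

# Chance that at least one of 20 no-effect tests hits "significance" anyway
family_wise_error = 1 - (1 - alpha) ** num_tests
print(f"{family_wise_error:.0%}")  # 64%
```

So across 20 null tests there's roughly a 64% chance of at least one spurious "win", which is why a single significant result out of many uncommitted tests proves little.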

Significance ≠ practical importance. A 0.1% lift can be statistically significant on huge traffic but operationally meaningless. Define a minimum detectable effect before you start so you don't ship trivial wins.
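
Choosing a minimum detectable effect also tells you how much traffic the test needs. This sketch uses the standard two-proportion sample-size formula; the 80% power assumption and the example rates are illustrative, not from the text above:

```python
import math

def sample_size_per_arm(baseline, mde, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect an absolute lift of `mde`
    at 95% confidence (z_alpha = 1.96) with 80% power (z_beta = 0.84)."""
    p1, p2 = baseline, baseline + mde
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / mde ** 2
    return math.ceil(n)

# Hypothetical: 5% baseline conversion, want to detect an absolute +1% lift
print(sample_size_per_arm(0.05, 0.01))  # roughly 8,000+ visitors per variant
```

Halving the minimum detectable effect roughly quadruples the required sample, which is why chasing tiny lifts gets expensive fast.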


Stop reading definitions. Start applying them.

Postwyse is the marketing platform built on these ideas.

Content calendar, AI agents, SEO + AIO + GEO tools, social management, brand safety, and 36 integrations.