PILLAR
A/B testing is how operators settle arguments: two variants, statistical significance, and a winner the data picks, not the highest-paid person in the meeting.
Most teams fail at A/B testing in one of three ways: they test too few hypotheses to find winners at any meaningful rate, they stop tests at the wrong moment (usually the first time a metric looks good), or they run tests on traffic too thin for the results to mean anything. The industry average is 2–3 tests per quarter. The testing programmes that transform businesses run 30+.
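To make the traffic problem concrete, here is the core of the sample-size maths: the standard two-proportion power calculation. This is a minimal sketch using only Python's standard library; the function name and the 5% to 6% conversion figures are illustrative, not from any specific test.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_base: float, p_variant: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-variant sample size for a two-sided two-proportion z-test,
    using the standard normal-approximation formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for a 5% significance level
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_base - p_variant) ** 2)

# Detecting a lift from a 5% to a 6% conversion rate:
print(sample_size_per_variant(0.05, 0.06))  # 8155 visitors per variant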
Every post walks through one part of that work: hypothesis prioritisation, sample-size maths, multivariate vs A/B testing, reading interaction effects, and shipping winners without regressions.
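As a taste of the first topic on that list: hypothesis prioritisation usually starts with a simple scoring model, and one common scheme is ICE (impact, confidence, ease). A minimal sketch, with made-up hypotheses and scores:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    impact: int      # 1-10: expected lift if the variant wins
    confidence: int  # 1-10: strength of the evidence behind it
    ease: int        # 1-10: how cheap it is to build and run

    @property
    def ice(self) -> int:
        # One common scheme: the product of the three scores.
        return self.impact * self.confidence * self.ease

# Illustrative backlog; names and scores are made up.
backlog = [
    Hypothesis("Shorten the checkout form", impact=7, confidence=6, ease=8),
    Hypothesis("Add social proof to pricing", impact=5, confidence=7, ease=9),
    Hypothesis("Redesign the homepage", impact=9, confidence=3, ease=2),
]

for h in sorted(backlog, key=lambda h: h.ice, reverse=True):
    print(f"{h.ice:>4}  {h.name}")
```

The point is not the exact formula but the discipline: every idea gets scored the same way before it earns a slot in the queue.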
COMING SOON
Hypothesis prioritisation frameworks, sample-size maths, multivariate vs A/B, and reading interaction effects.
Book my free AI audit →