Not too long ago, I ran yet another high-profile conversion test on a rather popular e-commerce site.
The hypothesis was clear and expectations high: change major elements on the front page, expecting at least a 10% overall lift. An increase of a few hundred grand in annual sales, easy.
Hint: it involved getting rid of yet another image carousel and introducing real sales copy. As is just about always the case these days.
The test ran for a solid two weeks, but the results looked rather shaky. In the end, every metric looked inconclusive: add-to-cart rates, order counts, and even engagement were about the same as before.
Could I have been so wrong? No effect whatsoever?
Turns out, nope.
I finally had the good sense to comb through the test period in Google Analytics, creating advanced segments for both of the variations used in the A/B test.
Well, whaddaya know.
Even though the usual metrics were practically unchanged, the metrics that ultimately matter told a very different story.
A 35% improvement in both Average Order Value and Total Revenue is nothing to sneeze at.
Considering this online store’s scale, it’ll likely result in multiple seven figures in annual improvement.
IF they have the guts to actually implement it, but that’s a different story.
The takeaway is this: when you analyze the results of your latest A/B test, never be content just looking at the obvious numbers. Go through all the data gathered during the test; you just might see changes that make all the difference. It takes just a few minutes of extra effort.
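The per-variant breakdown that surfaced the difference here can be sketched in a few lines. Here's a minimal Python sketch, assuming you've exported per-order data tagged with its test variant (the variant names and order values below are invented for illustration, not from the actual test):

```python
# Minimal sketch: compute order count, total revenue, and Average Order
# Value (AOV) per A/B test variant. The orders list is hypothetical data
# standing in for an export from your analytics tool.
from collections import defaultdict

orders = [
    ("control", 40.0), ("control", 55.0), ("control", 35.0),
    ("variation", 60.0), ("variation", 80.0), ("variation", 85.0),
]

totals = defaultdict(float)   # revenue per variant
counts = defaultdict(int)     # order count per variant

for variant, value in orders:
    totals[variant] += value
    counts[variant] += 1

for variant in sorted(totals):
    aov = totals[variant] / counts[variant]
    print(f"{variant}: orders={counts[variant]} "
          f"revenue={totals[variant]:.2f} AOV={aov:.2f}")
```

The same split is what advanced segments in Google Analytics give you without code: one segment per variant, then compare the e-commerce reports side by side.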