What percent of split tests lose?

This is the second in a series of posts revealing secrets no one tells you about A/B testing. Since DoWhatWorks discovers and assesses tests from thousands of the world's largest companies every day, we have a unique view of trends across experiments.

In Episode 1, we saw that even some of the world's top brands can only run about 12 experiments per year for key parts of their websites.

In this installment, we come to you with even more bad news… Most split tests don't win.

According to Optimizely, only 20% of their clients' tests move a metric. You have much better odds betting on red in a casino, where red hits roughly 47% of the time on an American roulette wheel.

So, with these formidable odds in mind, what can you do about it?

Don’t repeat your past failures

It's common for companies to repeat their own mistakes. Seemingly obvious solutions can lure you into a trap. New employees often rerun old (losing) tests, only to see the same (losing) results.

Our co-founders saw this cycle firsthand when they worked at Meetup. Every new product manager who took over a monetization flow wanted to add a sales page to the start of it. They were convinced their sales page would be different. And every single time, adding the sales page to the flow lost. This cycle repeated at least six times over a decade and consumed more than six months of testing time.

Of course, there will always be reasons to retry failed tests; some things that didn't work in the past can work in the present (for example, Kozmo.com was a giant failure, but DoorDash is a unicorn).

The question is, “Why will things be different this time?” Has the landscape changed? Are you trying something fundamentally different in your execution? It’s okay to take repeat swings at something, but repeating experiments that failed in the past needlessly wastes everyone’s time.

If you have past results, use them. If you don't have data, run simple tests to learn fast or look outside your organization for insights that can help you drive results.

Don’t repeat others’ failures

A smart person learns from their mistakes, but a brilliant person learns from everyone else's. If you can find out what has worked and what has failed for others, you can avoid the ideas that consistently lose and spend your time on the ones that actually have a shot at succeeding.

Unfortunately, results tend to stay trapped inside individual companies. You can ask peers at other companies and scour the web for ideas. We built DoWhatWorks to solve this problem and give you access to everyone's tests at scale.

If you only get 12 shots a year to optimize an experience, make every shot count. Relative to average win rates, a couple of extra wins a year can 1.5x to 2x your win rate (a quick back-of-the-envelope calculation follows below). You can get there by using data to pick the things that are more likely to work.
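Here's a minimal sketch of that arithmetic in Python, assuming the ~12 tests per year from Episode 1 and Optimizely's ~20% win rate; your own baseline numbers may differ:

```python
# Back-of-the-envelope math: how much do a couple of extra wins matter?
# Assumes ~12 tests/year (Episode 1) and a ~20% baseline win rate (Optimizely).
tests_per_year = 12
baseline_win_rate = 0.20

expected_wins = tests_per_year * baseline_win_rate  # ~2.4 wins per year
boosted_wins = expected_wins + 2                    # two extra wins from better test selection

print(f"Baseline: {expected_wins:.1f} wins/year")       # 2.4
print(f"With two extra wins: {boosted_wins:.1f}")        # 4.4
print(f"Improvement: {boosted_wins / expected_wins:.2f}x")  # ~1.83x, inside the 1.5x-2x range
```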

Don’t stop testing. Just make sure you focus your tests on what is most likely to matter.

If you want early access to the DoWhatWorks private beta, go here.