Sunday, May 16, 2010

A/B Testing

A/B testing consists of developing two versions of a page and showing each version to visitors at random.
Visitors are assigned to either the A or the B group, and analytics measure how each group performs so we can keep the winning version of the page.

This is useful for finding out why the conversion rate of visitors into customers of a website is not as good as expected, and it helps us take the necessary actions to improve it with small changes, for example:
- Removing form fields.
- Adding relevant form fields.
- Trying different marketing landing pages.
- Trying different explanations.
- Adding interstitial pages.
- Changing email contents or subject lines.

This kind of testing is no substitute for talking to users, usability tests, acceptance tests, unit tests and, of course... thinking.

Behind the scenes
 
The first time a visitor lands on the web site, a version of the page is picked at random, recorded and displayed in the browser. On the visitor's following visits, the previously assigned version is looked up (usually via a cookie stored on the client device) and returned, so the random pick is permanent.
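
As a minimal sketch of that behavior, assuming a Flask app and a hypothetical `ab_variant` cookie name (the post doesn't name a specific stack), the sticky assignment could look like this:

```python
import random

from flask import Flask, make_response, render_template, request

app = Flask(__name__)

COOKIE_NAME = "ab_variant"  # hypothetical cookie name
VARIANTS = ("A", "B")

@app.route("/landing")
def landing():
    # Reuse the variant already stored on the client, if any;
    # otherwise pick one at random on the first visit.
    variant = request.cookies.get(COOKIE_NAME)
    if variant not in VARIANTS:
        variant = random.choice(VARIANTS)

    # Render the matching template and persist the pick so the
    # same visitor always sees the same version.
    response = make_response(render_template(f"landing_{variant}.html"))
    response.set_cookie(COOKIE_NAME, variant, max_age=60 * 60 * 24 * 90)
    return response
```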

This behavior leads to some development considerations. The most important is that A/B tests are temporary: when a test finishes, one version of the page wins and the rest are deleted. Since developers want maintainable code and will need to write a lot of A/B tests, the process should be as easy as possible; building a small framework for this particular purpose eliminates the annoying repetitive steps.
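
One shape such a framework could take (a sketch under the assumption of one cookie per test; the class and names are hypothetical, not from the post):

```python
import random

class ABTest:
    """Declares an A/B test once, so adding a new test is a one-liner
    and the assign-and-persist boilerplate lives in a single place."""

    def __init__(self, name, variants=("A", "B")):
        self.name = name
        self.cookie = f"ab_{name}"  # one cookie per test
        self.variants = variants

    def assign(self, cookies):
        """Return this visitor's variant, reusing a previous pick
        found in the request cookies when there is one."""
        variant = cookies.get(self.cookie)
        if variant not in self.variants:
            variant = random.choice(self.variants)
        return variant

# Declaring a new test, and later deleting it when a winner is found,
# is then a single line:
signup_copy = ABTest("signup_copy", ("short", "detailed"))
```

The caller still writes the chosen variant back to a response cookie, as in the earlier snippet; the point is that each new test no longer repeats that logic.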

Improving conversion funnels

To create useful A/B tests we should know the steps by which our site converts visitors into customers, that is, the conversion funnels.
Most A/B tests concentrate on only one step in the funnel, and each step can be tracked with a metric (see the sketch after this list), for example:
- Sessions, and sessions that include a registration
- People who searched, viewed a detail page, contacted, leased
- People who saved favorites, started a cart, completed a purchase
- People who saw at least 3 pages, or clicked on an ad
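
As an illustration, here is a toy sketch (with made-up event data and the funnel steps from the list above) of reducing raw events to per-step counts for each test group:

```python
# Toy event log; in a real site these would come from analytics storage.
events = [
    {"user": 1, "group": "A", "step": "searched"},
    {"user": 1, "group": "A", "step": "viewed_detail"},
    {"user": 2, "group": "A", "step": "searched"},
    {"user": 3, "group": "B", "step": "searched"},
    {"user": 3, "group": "B", "step": "viewed_detail"},
    {"user": 3, "group": "B", "step": "contacted"},
]

FUNNEL = ["searched", "viewed_detail", "contacted", "leased"]

def funnel_counts(events, group):
    """Count distinct users per funnel step for one test group."""
    users = {step: set() for step in FUNNEL}
    for e in events:
        if e["group"] == group and e["step"] in users:
            users[e["step"]].add(e["user"])
    return {step: len(users[step]) for step in FUNNEL}

for group in ("A", "B"):
    print(group, funnel_counts(events, group))
# A {'searched': 2, 'viewed_detail': 1, 'contacted': 0, 'leased': 0}
# B {'searched': 1, 'viewed_detail': 1, 'contacted': 1, 'leased': 0}
```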

When we analyze the results of an A/B test we should be careful to compare apples to apples. Traffic behaves differently at different times, so for example we should not compare visits received on a Friday night with visits from a Monday morning.

If our portal's interface is changed and tested with an A/B test, we should only include the new users of each page version in our analysis, because people don't like UI changes: our old visitors are used to the previous interface, which can a priori cause the new version to lose the test.

Running multiple tests

A/B tests can take a while to finish, and we usually have many A/B tests that we'd like to run, so it would be nice not to have to wait for each one to complete. We can achieve this by running multiple tests at once: assign people to each test independently at random, so different tests can run at the same time without biasing each other.
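
A common way to get independent assignments per test (a sketch of a well-known technique; the post doesn't prescribe a specific mechanism) is to hash the visitor id together with the test name, which is stable per visitor yet effectively uncorrelated across tests:

```python
import hashlib

def variant_for(user_id, test_name, variants=("A", "B")):
    """Deterministically map a visitor to a variant, independently
    for each test, by hashing user id and test name together."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor gets a stable variant within each test, but the
# assignments across tests are independent of one another:
print(variant_for("visitor-42", "signup_copy"))   # e.g. "A"
print(variant_for("visitor-42", "pricing_page"))  # may differ
```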

Google's Website Optimizer

If you want to get going with A/B testing and don't want to worry about doing any of the reporting yourself, Google's Website Optimizer is an option.
However, be aware that it has a number of major limitations:
- It works by using JavaScript to inject static content into your page, which can be a problem when doing dynamic A/B testing.
- It doesn't support multiple metrics.
- It can't be used to A/B test email content.
