1,000,000 monkeys can’t be wrong
Multivariate Testing (MVT) is starting to earn a place in the pantheon of buzzwords like cloud computing, service-oriented architecture, and synergy. But is a test the same thing as an experiment? While I am not a statistician (nor did I stay at the Holiday Inn last night), working at MarketingExperiments with the analytical likes of Bob Kemper (MBA) and Arturo Silva Nava (MBA) has helped me understand the value of a disciplined approach to experimental design.
What I see out there is that a little knowledge is indeed a dangerous thing. Good intentions behind powerful and relatively easy-to-use platforms like Omniture® Test&Target™ and Google® Website Optimizer™ have generated a misleading sense that as long as a multivariate test is large enough (several hundred or more combinations being tested), at least one of the combinations will outperform the control.
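To make the scale concrete: a full-factorial combination count is just the product of each page element's variant count. The figures below are hypothetical, chosen only to show how quickly the space grows:

```python
from math import prod

# Hypothetical page elements and their variant counts (illustrative only)
variants = {"headline": 4, "hero image": 3, "call to action": 5, "layout": 3}

combinations = prod(variants.values())
print(combinations)  # 4 * 3 * 5 * 3 = 180
```

Add a fifth element with just three variants and the space triples to 540 cells, which is why "several hundred combinations" is easy to reach with a modest-looking test.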
This notion has become the value proposition of a growing number of companies offering services around either the big-name or their own (simpler, and often therefore easier to set up) MVT tools. They are ostensibly betting on the technology, and not on a systematic approach to experimental design or any particular UI/UX (user interface/user experience) optimization theory.
Even though, as Bob has pointed out to me, it is entirely possible for an MVT setup with a billion combinations to yield no lift over the control, my contention is that the risk-weighted business cost to the vendor of a dissatisfied customer is low. Therefore, little stops the burgeoning MVT shops from safely offering a “100% lift guarantee.” Just like the proverbial million monkeys with typewriters, somewhere among thousands of spray-and-pray treatments, their MVT tests are expected to produce one that beats the rest.
1 monkey with a stick
One major difficulty with testing in general becomes painfully obvious with MVT: the more treatments, the longer the test will run. For most companies, what looks at first like a great test may require a year’s worth of traffic to get statistically valid results.
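As a back-of-the-envelope sketch of why this happens (all figures and the function name below are hypothetical, using the standard two-proportion sample-size formula at 95% confidence and 80% power, with traffic split evenly across combinations):

```python
import math

def required_days(combinations, daily_visitors, baseline_rate=0.03,
                  min_detectable_lift=0.10, z_alpha=1.96, z_beta=0.84):
    """Rough per-combination sample size for a two-proportion test
    (95% confidence, 80% power), converted into days of traffic when
    visitors are split evenly across all combinations."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    pooled = (p1 + p2) / 2
    n_per_cell = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                   + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                  / (p2 - p1) ** 2)
    return math.ceil(n_per_cell * combinations / daily_visitors)

# 256 combinations, 5,000 visitors/day, detecting a 10% relative lift
# over a 3% baseline: the answer runs to thousands of days.
print(required_days(256, 5000))
```

The exact numbers depend on the assumptions, but the shape of the problem does not: sample size scales linearly with the number of cells, so every element you add to the test multiplies the required traffic.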
In response, one emerging MVT service model promises a faster “lift” by adaptively eliminating likely underperformers mid-test, at the cost of results that reveal little beyond the identity of the winner. Such results are less useful than their full-factorial brethren for designing subsequent tests, because eliminating treatments along the way makes it difficult to infer the psychological factors and consumer preferences responsible for the outcome. The business benefits, however, arrive sooner.
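A toy sketch of the idea (the function name and the crude elimination rule are mine, not any vendor's): serve treatments in rounds, and after each round drop any treatment whose observed conversion rate trails the current leader badly.

```python
import random

def adaptive_eliminate(true_rates, visitors_per_round=1000, rounds=20):
    """Toy adaptive elimination: simulate rounds of traffic over
    treatments with the given (simulated) conversion rates, dropping
    treatments that fall far behind the leader after each round."""
    stats = {i: [0, 0] for i in range(len(true_rates))}  # id -> [conversions, trials]
    for _ in range(rounds):
        shown = visitors_per_round // len(stats)  # split traffic among survivors
        for i in stats:
            stats[i][0] += sum(random.random() < true_rates[i] for _ in range(shown))
            stats[i][1] += shown
        rates = {i: c / t for i, (c, t) in stats.items()}
        leader = max(rates.values())
        # Crude rule: eliminate anything more than 20% (relative) below the leader
        stats = {i: v for i, v in stats.items() if rates[i] >= 0.8 * leader}
        if len(stats) == 1:
            break
    return sorted(stats)

print(adaptive_eliminate([0.010, 0.012, 0.030]))
```

Real implementations use proper statistical stopping rules rather than a fixed threshold, but the trade-off is the same: eliminated cells accrue too little data to explain why they lost.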
So, where exactly is the problem? As marketers, are we in the business of employing the scientific method to design graceful experiments or is our fiduciary duty to get measurable results? I humbly suggest that as marketing professionals, we should neither bet on nor be satisfied with just one test, no matter how successful it is.
The bad news and the good news is that we must design an experimental plan to optimize continually, to learn from preceding test results, and to respond to changes in customer preferences, market conditions, and our ability to segment data and traffic. Expertise in experimental design and understanding how to interpret results simply cannot be replaced by set-it-and-forget-it technology (yet).
Economy of testing
That is not to say that MVT provides incorrect results. The results are mathematically valid, even if they do take a long time to obtain. At the same time, from the business point of view, investment in experimental-design expertise is costly, and absorbing volumes of published research consumes valuable time. The 100% guarantee sure sounds good.
And so the “guaranteed lift” offers will appeal to thrifty marketers who have yet to delve into the science of optimization. The critical issue in the economy of testing is whether a methodical design of experiments is likely to provide greater ROI through an interpretation-driven sequence of test iterations than a successful but terminal one-off test. Our research supports the former.
2010 may become the year of multivariate testing, but I hope that it will also quietly set the stage for an upcoming year of ROI-conscious design of experiments.
How do you use multivariate testing? Have you created an experimentation plan or do you rely on a series of one-off tests? Share your triumphs and concerns in the comments section of this post or start a conversation with your peers in the MarketingExperiments Optimization group.