Multivariate tests enable you to test combinations of changes simultaneously: you change several elements on a page at once and identify which of the possible combinations has the biggest impact on the tracked indicators.
What is a multivariate test?
During an A/B test, you change only one element at a time (the label of an action button, for instance) in order to determine its impact. If you changed both the label and the color of a button simultaneously (e.g. a blue “buy” button versus a red “purchase” button) and noticed an improvement, how would you know whether it was the label or the color that truly contributed to this performance? A multivariate test enables you to answer this question.
The purpose of multivariate tests is threefold:
- They save you from carrying out several A/B tests, since a multivariate test is equivalent to several A/B tests run simultaneously on the same page;
- They determine the impact of each variable on the measured gains;
- They enable you to measure the interaction effects between several supposedly independent elements (e.g. page title and illustration visual).
Multiplying the number of variations will mechanically reduce the sample assigned to each combination. An MVT will remain active for longer than a standard A/B test because a certain amount of traffic is required on the tested pages in order to guarantee reliable results.
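To illustrate this dilution with hypothetical figures (the visitor counts below are assumptions for the sake of the example, not thresholds from the tool):

```javascript
// With an even split, each combination of an MVT receives only a fraction
// of the visitors that each variation of a simple A/B test would receive.
function visitorsPerCombination(totalVisitors, combinationCount) {
  return Math.floor(totalVisitors / combinationCount);
}

// Hypothetical example: 18,000 visitors reach the tested page.
console.log(visitorsPerCombination(18000, 2)); // A/B test: 9000 visitors per variation
console.log(visitorsPerCombination(18000, 9)); // 9-combination MVT: 2000 visitors per combination
```

All else being equal, an MVT therefore has to run longer than an A/B test to accumulate a comparable sample for each combination.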
Setting up an MVT
Let’s use the previous example: our page features a green button that says “Order”. We want to test the label and the color of the button:
Start by creating a multivariate test. This involves 5 steps.
1) Main information: enter the URL of the subtests. A multivariate test generally concerns one page only; however, it can be applied to multiple pages.
By default, only one subtest is created. To add more subtests, click the icon below the subtest.
In our example, the URL is the same for both subtests. We will create the following subtests:
- Subtest 1: button label test
- Subtest 2: button color test
Don’t forget to save changes.
2) Variation editor: create your variations
Configure the changes you want to apply
We will now implement one change per variation.
- Select the button label subtest from the drop-down list,
- In variation 1, change “Order” to “Buy”,
- In variation 2, replace the text with “Purchase”.
Make sure you rename each variation; this will make it easier to analyze the report.
- Select the button color subtest from the drop-down list,
- In variation 1, change the button color from green to blue,
- In variation 2, change the button color from green to red.
We have thus changed 2 variables (the label and the color) and each of these variables features three versions (the original and the two variations).
If you change 2 variables and each of these features three versions, there will be 9 combinations in the final ranking (number of variations of the first variable multiplied by the number of variations of the second variable).
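This multiplication rule generalizes to any number of variables. As a quick sketch (the function name is ours, not part of the tool):

```javascript
// Each entry is the number of versions of one variable,
// counting the original plus its variations.
function totalCombinations(versionsPerVariable) {
  return versionsPerVariable.reduce((product, count) => product * count, 1);
}

// Our example: label (original + 2 variations) × color (original + 2 variations).
console.log(totalCombinations([3, 3])); // 9 combinations
```

Adding a third variable with, say, two versions would double the count again, which is why MVTs dilute traffic so quickly.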
Configure your goals
Just like in a standard A/B test, you can set up click tracking, % scroll plug-ins, etc. Your click tracking goals must have exactly the same names in each subtest in order to appear in the master test report.
3) The Targeting section enables you to configure targeting for each subtest. Make sure to select the same target pages as for step 2). In our example, targeting will be the same as for subtests 1 and 2. Use the drop-down list to check the targeting of each subtest.
4) Traffic allocation: we recommend leaving the default traffic allocation as it is.
Note that with a multivariate test, all users are tracked. Each % of targeted traffic will be assigned a variation and tracked in the report. Some subtests may include 3 variations while others only include 2.
5) 3rd party integration: no changes specific to multivariate tests.
When you run the test, the editor will create all possible combinations, i.e. one blue “Purchase” button, one red “Order” button, etc. No further configurations are needed.
Reports for a multivariate test offer both combined reporting and reporting for each subtest. In our example, we created 2 subtests, which means 3 reports: one for each subtest and one for the master test.
Each subtest will include a standard A/B test report.
Reading a master test report
The report will provide the following elements for each goal:
- Conversion rate: average conversion rate of the combination
- Reliability: indicates whether the difference in performance between the combinations is statistically significant
- Bayesian: indicates the measured gain
- Force: impact of each change on the overall conversion rate. If all forces equal 1, the impact is divided evenly among the changes.
The various combinations are displayed as a list in the report, with the winning combination in first place. In our example, the winning combination includes elements from both a variation and the original version. This combination generated an average conversion rate of ****%. Each change had the same impact on the final result.
To find out how the original page performed, look for the combination that includes the original version of each variable.
Configuring a “page seen” type goal
Unlike for A/B tests, you need to configure the “page seen” type goals (or Standard URL goals) of an MVT test in the global code of your account (Settings > Global Code) using the following function:
ABTastyClickTracking('Name of Standard URL goal', null);
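As a hedged sketch of what such a global-code snippet might look like, assuming you want the goal to fire only on a specific page (the wrapper function, goal name, and URL below are hypothetical; only the `ABTastyClickTracking` call itself comes from the documentation above):

```javascript
// Hypothetical wrapper: fires a "page seen" (Standard URL) goal, but only
// when the AB Tasty runtime has loaded, so the snippet fails silently elsewhere.
function trackPageSeenGoal(goalName) {
  if (typeof ABTastyClickTracking === 'function') {
    ABTastyClickTracking(goalName, null); // second argument as shown above
    return true;
  }
  return false;
}

// Hypothetical usage: fire the goal on an order confirmation page only.
if (typeof window !== 'undefined' && window.location.pathname === '/order-confirmation') {
  trackPageSeenGoal('Order confirmation seen');
}
```

The goal name passed to the function must match the name of the Standard URL goal configured in the test, just as click tracking goal names must match across subtests.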
The analysis of an MVT test is more complex than that of a standard A/B test. That is why we recommend configuring no more than 3 goals when starting an MVT test; otherwise, you may struggle to read the results. Maintaining a simple, fast-executing process guarantees a higher level of confidence and faster iteration on optimization ideas.