FAQ traffic allocation

What happens when I deactivate a campaign variation at the traffic allocation step?

When you deactivate a campaign variation at the traffic allocation step, meaning that you set its traffic to 0%, you might still see this variation online or see data coming through in the reporting for this variation.

Note that modulation changes on a live campaign only apply to new visitors (those who see the campaign for the first time). A visitor who sees a variation always sees the same variation during their current and following sessions. As the allocation information is stored in the cookies, variations that are deactivated from the traffic allocation step are still visible for returning visitors.

If you have previously been assigned to the variation in question and want to make sure it is deactivated (no longer visible), you must delete your cookies or open a new incognito window (with all other windows closed) to be considered a new visitor.

However, we don’t recommend changing the traffic allocation of a live campaign. If you want to do this, you should pause the current campaign and duplicate it with the desired traffic allocation.

Why does traffic modulation seem inconsistent with my settings?

The report may show a split between variations that differs from the percentages you set. For example, you defined a 50/50 traffic modulation, but your report shows a different split.

These differences may occur when there is not enough traffic on the campaign. When a visitor lands on the website for the first time, they are randomly assigned a variation (or the original version). If they visit the website again, they remain assigned to the same variation. However, if there is very little traffic on the website, the distribution of visitors between the variations may not be even.

For example, out of 10 visitors, seven could be assigned to variation 1 and three to variation 2. With 100 visitors, you may have 60 who see variation 1 and 40 who see variation 2. The more visitors the website receives, the closer the distribution will tend towards 50/50.
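This small-sample effect can be reproduced with a short simulation. The sketch below is purely illustrative (it is not AB Tasty's actual bucketing code): each new visitor is independently assigned to one of two variations with a 50% chance, and the realized share is reported for different traffic volumes.

```python
import random

def simulate_split(n_visitors, seed=42):
    """Assign each new visitor to one of two variations with a 50/50
    chance and return the share of visitors who saw variation 1."""
    rng = random.Random(seed)
    v1 = sum(1 for _ in range(n_visitors) if rng.random() < 0.5)
    return v1 / n_visitors

for n in (10, 100, 10_000):
    # With few visitors the realized share can be far from 50%.
    print(f"{n:>6} visitors -> variation 1 got {simulate_split(n):.1%}")
```

Running this for increasing visitor counts shows the realized split drifting towards 50% as traffic grows, which is exactly the behaviour described above.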

You may see a difference in the distribution of up to 12.4% with 1,000 visitors; 4.32% with 10,000 visitors; and 1.1% with 100,000 visitors. The difference drops to just 0.37% when you hit a million visitors.

Traffic allocation was changed after the campaign launched

You can check if traffic allocation was changed via the campaign history tab in your campaign reporting.

If this is the case, the traffic modulation will most likely not be respected, because visitors who had already been assigned to a variation (or to the original version) are not impacted by the change. As their assignment is already stored in the AB Tasty cookie in their browser, they will always see the same version; traffic modulation changes only apply to new visitors.

For example, if your campaign was originally 50/50 and you change it to 70/30, there’s a chance your campaign will never actually reach 70/30.
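This sticky behaviour can be sketched as follows. The model is a simplified assumption, not AB Tasty's actual implementation: a visitor's assignment is stored the first time they are bucketed (simulating the cookie), later visits reuse the stored value, and so a weight change only affects visitors with no stored assignment.

```python
import random

def assign(visitor_cookies, weights, rng=random):
    """Return the variation for a visitor, reusing a stored assignment
    (simulating the AB Tasty cookie) when one exists."""
    if "variation" not in visitor_cookies:
        # Only new visitors are bucketed with the current weights.
        names = list(weights)
        visitor_cookies["variation"] = rng.choices(
            names, weights=[weights[n] for n in names])[0]
    return visitor_cookies["variation"]

cookies = {}
first = assign(cookies, {"original": 50, "variation_1": 50})
# Reallocating to 70/30 does not move this returning visitor:
second = assign(cookies, {"original": 70, "variation_1": 30})
assert first == second
```

In this model, even setting a variation's weight to 0 leaves returning visitors on it, which matches the FAQ's description of why deactivated variations remain visible.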

Why does the number of visitors distribution on the report not match the "Traffic Allocation" setting?

The golden rule of AB Tasty is that visitors assigned to a specific variation remain assigned to that same variation until the campaign is paused or deleted (or until the cookie expiration date is reached).

The consequence is that each traffic allocation configuration has an impact on the results of the next one.

So if the traffic allocation has been changed multiple times during the campaign's lifespan, the real allocation in the report may be far from the last one set.

That last allocation may never be reached, or only after an extremely long time.
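The cumulative effect of stacked configurations can be simulated with a hypothetical sticky-assignment model (again, not AB Tasty's real code): visitors bucketed in earlier phases keep their variation, so the overall split is a blend of every allocation the campaign has ever had.

```python
import random

def realized_split(phases, rng=None):
    """Simulate a campaign whose allocation changes over time.
    `phases` is a list of (new_visitors, weight_of_variation_1)
    tuples; visitors assigned in earlier phases keep their variation.
    Returns variation 1's overall share."""
    rng = rng or random.Random(0)
    v1 = total = 0
    for new_visitors, w1 in phases:
        for _ in range(new_visitors):
            if rng.random() < w1:
                v1 += 1
            total += 1
    return v1 / total

# 50/50 for 10,000 visitors, then 70/30 for another 10,000:
share = realized_split([(10_000, 0.5), (10_000, 0.7)])
```

With equal traffic in each phase, the overall share lands near 60%, not the 70% currently configured, and it only approaches 70% as new visitors dilute the old cohort.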

I set my traffic allocation to 0, why is there still data in my report?

It may seem strange, but if the traffic is set to 0 (after having been set differently), there can still be conversions on the original, because of the old visitors who are still assigned to it.

The traffic was set to 50/50 once (according to the logs), but the result in the report is not balanced. How is this possible?

The campaign was originally duplicated from a test whose traffic allocation was not set to 50/50 (for example, duplicated from a test with a 20/80 split).

When and why should you use dynamic allocation?

There are two main reasons:

  • The traffic is so low that there is almost no hope of reaching statistical significance.
  • The product that needs to be optimized has too short a life cycle (examples: flash sales, news article headlines, etc.). In this case, a classic AB test would deliver its result too late: either the product no longer exists, or it only gets marginal traffic.

When using dynamic allocation, unlike an AB test, no real decision is made. Instead of a decision (declaring a winner), the test only changes the allocation: the better a variation converts, the more traffic share it receives. This is statistical optimisation. We cannot be sure of each specific visitor's allocation, but considering the experience as a whole, visitors are statistically steered towards the better-performing variations. In fact, dynamic allocation does not declare a winner in the classical sense, especially because even the worst variation still receives a little traffic, just enough to ensure that it has been tried enough.

All the intelligence in dynamic allocation lies in the formulas that choose the best compromise, given the limited information available, each time the allocation scheme is updated.
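One common way such formulas are built is Thompson sampling on a Bernoulli conversion rate. The sketch below is an illustration of that general technique only; AB Tasty's actual formulas are not described in this article, so the Beta-posterior approach here is an assumption, not the product's algorithm.

```python
import random

def update_allocation(stats, n_samples=10_000, rng=random):
    """Estimate new traffic shares by Thompson sampling. `stats` maps
    variation name -> (conversions, visitors). For each draw, sample a
    plausible conversion rate per variation from a Beta posterior and
    give the round to the highest sample."""
    wins = {name: 0 for name in stats}
    for _ in range(n_samples):
        best, best_rate = None, -1.0
        for name, (conv, visits) in stats.items():
            # Beta(successes + 1, failures + 1) posterior on the rate.
            rate = rng.betavariate(conv + 1, visits - conv + 1)
            if rate > best_rate:
                best, best_rate = name, rate
        wins[best] += 1
    # Better-performing variations win more draws, so they get more traffic.
    return {name: wins[name] / n_samples for name in stats}

shares = update_allocation({"original": (30, 1000), "variation_1": (60, 1000)})
```

Note how the output is a traffic split rather than a verdict: the weaker variation keeps a small share as long as its posterior still overlaps the leader's, which mirrors the "no winner is declared" behaviour described above.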

How do dynamic allocation tests end?

You can decide to end the test based on business constraints; for flash sales or news headlines, for instance, the business can certainly provide guidelines.

You can also decide from the report page. If one variation has a high “chance to win” according to your settings, you can consider that there is no longer any point in driving traffic to the other variations.

To set that threshold, you should understand the fundamental meaning of “significance”, in particular the fact that a classic AB test uses a statistical threshold to declare a winner. A classic AB test has an original variation, sometimes called the reference; this variation already exists and is trusted, whereas the other variations are often mock-ups that have never been tried before. The trust we have in the original comes from the fact that it has historical data that is considered acceptable (perfectible, but acceptable). That is why, in a classic AB test, even if B has slightly more conversions than A (the reference), we will still choose A as the winner. Only when the number of extra conversions is above a given threshold (asserted by a statistical test) will B be considered the winner. This margin is there to ensure that B is not winning “by chance” against the trusted original.

If you are using dynamic allocation, this means that you have low traffic, or at least that the original does not have a strong history of data behind it. In this context, you can use a lower threshold (such as 90%).

We can go even further. If all your variations are new, for instance if the flash sale hasn't started yet, or if you are writing several headlines for an article before publishing it, then you do not need to reach statistical significance at all. If you have to make a decision at a given time, it is perfectly fine to pick the best variant, even if it is only slightly the best.

Technical properties:

  • Dynamic allocation is updated every 10 minutes.
  • It is very rare to have a truly stable KPI, so we applied a “slow-down” technique to adapt the raw statistical approach to reality. In practice, this algorithm will never totally shut down a variation's traffic. So if you think the winner should no longer change, a dynamic allocation test should be stopped at some point.
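One simple way such a "never fully shut down" behaviour can be achieved is a minimum-share floor applied to the computed allocation. This is a hypothetical illustration of the idea, not AB Tasty's documented mechanism, and the 5% floor is an arbitrary example value.

```python
def apply_floor(shares, floor=0.05):
    """Clamp each variation's traffic share to a minimum, then
    renormalize, so no variation's traffic is ever fully shut down
    (illustrative only; the floor value is an assumption)."""
    floored = {name: max(share, floor) for name, share in shares.items()}
    total = sum(floored.values())
    return {name: share / total for name, share in floored.items()}

# Even a variation the algorithm would drop entirely keeps some traffic:
out = apply_floor({"original": 0.0, "variation_1": 1.0})
```

This keeps the losing variation measurable: if its performance recovers, the algorithm still has fresh data with which to raise its share again.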
