Sequential Testing Alerts offers a sensitivity threshold to detect underperforming variations early and trigger alerts or pause experiments. It enables you to make data-driven decisions swiftly and optimize user experiences effectively. Sequential Testing Alerts saves time and resources by stopping experiments that are unlikely to yield positive results.
This alerting system is based on the percentage of conversions lost per variation.
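As a rough illustration of the idea (this is not the platform's exact formula, just a minimal sketch), the percentage of conversions lost can be thought of as the shortfall of a variation relative to the reference:

```python
def conversions_lost_pct(reference_conversions: int, variation_conversions: int) -> float:
    """Hypothetical illustration: percentage of conversions a variation
    loses relative to the reference, assuming both received equal traffic."""
    if reference_conversions == 0:
        return 0.0
    lost = reference_conversions - variation_conversions
    return 100.0 * lost / reference_conversions

# Example: the reference converted 200 visitors, the variation only 170
print(conversions_lost_pct(200, 170))  # 15.0 (% of conversions lost)
```

The function names and the equal-traffic assumption here are illustrative only; the real alerting system evaluates this loss sequentially as data accumulates.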
Like any alerting system, it requires you to set a sensitivity that triggers the alarm. And as with any alerting system, choosing a sensitivity means making a trade-off between two extremes: with high sensitivity you get many alerts, so useful alarms may be lost among false ones; with low sensitivity some problems may be missed, or detected later.
Please refer to our Advanced Option experimentation setup to configure the Sequential Testing Alerts feature.
#️⃣ Takeaway N° 1
Each service, website, or application is unique, so you might launch several experiments with different levels of sensitivity before finding the one best adapted to your business. The lower the sensitivity level that gets triggered, the higher the chance that there is a real problem (and that an alarm should be raised).
#️⃣ Takeaway N° 2
- If too many false alarms are raised, lower the sensitivity.
- If real problems are missed, raise it.
But what are sensitivity levels?
Each sensitivity level defines a threshold: the variations are compared with the reference, and an alert is triggered as soon as one of the thresholds is reached.
The levels are calibrated on the number of false alarms you would get if all your experiments were in fact neutral tests.
- The High sensitivity level raises the most alerts, and may even flag neutral experiments. It enables you to make better use of your resources by reallocating traffic to the experiments that are more worthwhile.
- The Low sensitivity level only detects the most impactful losses, and you won't avoid losing some traffic, since problems are detected later than with the High level.
- The Balanced sensitivity level is a compromise between the High and the Low.
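To make the trade-off concrete, here is a minimal sketch assuming each sensitivity level maps to an alert threshold on the estimated conversion loss. The level names come from this article; the threshold numbers and function names are invented for illustration, and the real system uses a sequential statistical procedure rather than a fixed cut-off:

```python
# Hypothetical thresholds: higher sensitivity = alert on a smaller loss.
THRESHOLDS = {
    "high": 5.0,       # alert from a 5% estimated conversion loss
    "balanced": 10.0,  # alert from a 10% loss
    "low": 20.0,       # alert only from a 20% loss
}

def should_alert(estimated_loss_pct: float, sensitivity: str) -> bool:
    """Trigger an alert when the estimated loss reaches the level's threshold."""
    return estimated_loss_pct >= THRESHOLDS[sensitivity]

# A variation estimated to lose 12% of conversions:
print(should_alert(12.0, "high"))      # True  - caught early
print(should_alert(12.0, "balanced"))  # True
print(should_alert(12.0, "low"))       # False - not impactful enough yet
```

This shows why High catches problems (and false alarms) sooner, while Low only reacts to the most impactful losses.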
Edge cases where alerting shouldn’t be enabled
At this point in the article, you know that A/A experiments are edge cases. There are two cases where alerting may not be a good idea, or where alerts should be interpreted with care:
- When running an A/A test: any alarm it triggers is, by definition, useless.
- When running a “neutrality test”: testing the value of a feature by hiding it. In this case you hope for a neutral result, which would allow you to get rid of the feature. The risk of false alarms is high, but it can still be useful to know whether the business is harmed. In this specific case the sensitivity should be set quite low, and you should expect some false alarms.
Sequential Testing Alerts will have the following benefits:
- Detects harmful variations as soon as possible, protecting your business.
- Detects useless variations as soon as possible, helping you get the best out of your website/app traffic.
- Frees you from daily experimentation checks.