A/B tests
Not sure which subject line will lead to the best click-through rate? Trying to decide on the most compelling CTA copy? Need some data to back up a discussion about the best hero image?
A/B testing takes the guesswork out of sending campaigns.
How A/B testing works
Once you've decided what you want to test, and how you'll measure the success of your campaign, you'll add your content and establish the parameters of the test.
Winning metric
The winning metric is what we measure to determine the 'winning' variant. You'll select this during the configuration stage. The options available to you will depend on the channel you're using to run your test.
Click-through rate: The percentage of recipients who open your message and click at least one link inside.
Read rate (WhatsApp only): The percentage of recipients who open and read your message.
Open rate (Email only): The percentage of recipients who open your message.
The 'winning' variant will be the variant with the best result for whichever metric you select.
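Under the hood, the comparison is straightforward: work out the chosen metric for each variant and keep the variant with the highest value. The sketch below is a minimal illustration of that idea, not the platform's actual implementation; the VariantResult fields and helper names are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class VariantResult:
    name: str
    delivered: int   # messages delivered for this variant
    opens: int       # recipients who opened the message
    clicks: int      # recipients who clicked at least one link

def click_through_rate(v: VariantResult) -> float:
    # Share of recipients who opened the message and clicked a link inside
    return v.clicks / v.delivered if v.delivered else 0.0

def open_rate(v: VariantResult) -> float:
    # Share of recipients who opened the message (email only)
    return v.opens / v.delivered if v.delivered else 0.0

def pick_winner(results: list[VariantResult], metric=click_through_rate) -> VariantResult:
    # The 'winning' variant is simply the one with the best result for the chosen metric
    return max(results, key=metric)

winner = pick_winner([
    VariantResult("Subject line A", delivered=500, opens=210, clicks=41),
    VariantResult("Subject line B", delivered=500, opens=198, clicks=56),
])
print(winner.name)  # Subject line B (11.2% vs 8.2% click-through rate)
```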
Test size
You must have at least 100 recipients in a list or audience to be able to send them an A/B tested campaign.
During the configuration stage, you'll choose what percentage of your recipients you want to test your variants on.
The minimum list size is 100 recipients, but remember that the larger your test size, the more accurate your results will be.
This is especially important if you're comparing more than two variants. Eight variants tested on 10% of a 100-recipient list leaves roughly one recipient per variant, which won't produce statistically significant results!
We recommend that you test your campaign variants on a minimum of 10% of your recipients.
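To see why test size matters, it helps to do the arithmetic on how many recipients each variant actually reaches. The snippet below is a rough illustration only; the function name and the example numbers are made up and are not part of the product.

```python
def recipients_per_variant(list_size: int, test_percentage: float, variants: int) -> int:
    """Roughly how many recipients each variant is tested on."""
    test_group = int(list_size * test_percentage / 100)
    return test_group // variants

# 8 variants on 10% of a 100-recipient list: only ~1 recipient per variant
print(recipients_per_variant(100, 10, 8))     # 1

# The same 10% test on a 10,000-recipient list: 125 recipients per variant
print(recipients_per_variant(10_000, 10, 8))  # 125
```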
Test duration
By default, the test duration is set to two hours. You can increase this as required.
The longer you test your variants for, the more accurate your A/B test results will be. However, remember that the length of the test window affects when the winning variant is sent out, so factor this in when scheduling your campaign.
For example, if you know your best sending time is 11 AM on Thursdays, you might decide to run a 24-hour A/B test starting at 11 AM on Wednesday, so that the winning variant is sent at the optimal time.
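If it helps to sanity-check the scheduling, the winner's send time is simply the test start time plus the test duration. A minimal sketch, with illustrative dates and a made-up function name:

```python
from datetime import datetime, timedelta

def winner_send_time(test_start: datetime, test_duration_hours: int) -> datetime:
    # The winning variant goes out once the test window closes
    return test_start + timedelta(hours=test_duration_hours)

# Start a 24-hour test at 11 AM on a Wednesday so the winner lands at 11 AM on Thursday
start = datetime(2024, 5, 1, 11, 0)  # Wednesday 1 May 2024, 11:00
print(winner_send_time(start, 24))   # 2024-05-02 11:00:00 (Thursday)
```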
Winner fallback
Sometimes A/B tests simply aren't conclusive. This can happen for a few reasons:
There's not enough data for the test to be conclusive. For example, if you selected the 'Click-through rate' metric, but nobody clicked through on any of your variants. This can happen if the test duration was too short, or the testing sample size was too small.
Negligible differences (less than 1%) in the results. For example, if you selected the 'Click-through rate' metric, and your first variant achieved an 8.1% click-through rate while your second variant achieved an 8.7% click-through rate, a difference of only 0.6 percentage points.
During the configuration stage, you'll choose what you want to happen if your test is inconclusive:
Manually select the fallback winner: This option allows you to choose which variant you'd prefer to send.
Send all test variations equally to remainder recipients: This option distributes the test variants equally among the remaining recipients.
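As a rough mental model of the inconclusive check described above: a test can be treated as inconclusive when no variant recorded any results at all, or when the best and second-best variants are less than one percentage point apart. The sketch below illustrates that logic only; the function name and threshold handling are assumptions for the example, not the product's actual code.

```python
def is_inconclusive(rates: list[float], min_difference: float = 0.01) -> bool:
    """Treat a test as inconclusive when nobody converted on any variant,
    or when the top two variants are less than 1 percentage point apart."""
    if all(rate == 0 for rate in rates):
        return True                             # no data, e.g. nobody clicked through
    best, runner_up = sorted(rates, reverse=True)[:2]
    return (best - runner_up) < min_difference  # negligible difference

print(is_inconclusive([0.081, 0.087]))  # True  (8.1% vs 8.7%: only 0.6 points apart)
print(is_inconclusive([0.0, 0.0]))      # True  (no clicks on any variant)
print(is_inconclusive([0.05, 0.12]))    # False (a clear winner)
```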