A/B Testing
Optimize your journeys by testing message variations, analyzing performance, and automatically delivering the best content to each audience segment.
A/B testing in Journeys helps you find which message variation performs best, so that the majority of your audience receives the best-performing content. You can test different versions of a message within a Send Message step—such as changing the subject line or content—and automatically pick the winner based on performance.
What You Can Do
With A/B testing in a Send Message step, you can:
Create multiple message variations (A, B, C, and so on, up to 8 variations)
Define traffic distribution between variations
Choose a winning metric (like opens, clicks, or conversions)
Set a time limit for the test to run (default: 365 days)
Use AI-based optimization to automatically shift traffic to better-performing variations
View detailed results and winner metrics in reports
Manually end the test or select a winner anytime
Setting Up an A/B Test
Add a Send Message Step
In your Journey builder:
Drag a Send Message step onto the canvas.
Select the message channel and content (Email, Push, or SMS). A/B testing is not currently supported for WhatsApp.
Click Create A/B Test inside the step.

Add Variations
You’ll start with Variations A and B when you create an A/B test.
When you create an A/B test, it is in Draft status.
To remove the A/B test, click the Delete icon.

To add more variations:
Click Duplicate on any variation to create C, D, and so on.

You can click Edit template to update a variation’s content.
Each variation can be edited to have:
Different subject lines or headers
Different message body or creatives
Different CTA wording or tone
You can remove a variation using the Remove option, as long as at least two variations remain.
Tip: Test one key element at a time (e.g., subject line or CTA) for clearer results.
Configure A/B Test Settings
Once you’ve added variations, set up how the test will run.
Click A/B Test Settings to open the settings page.

Test Name
Give your test a name so you can uniquely identify it among tests added in other message steps in the same journey.
Sending Distribution
Define what percentage of your audience should receive each variation to begin with. By default, traffic is distributed equally among all variations. Example:
A → 50%
B → 50%
After initial data is analyzed, the system automatically shifts the distribution toward the better-performing variation.
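For illustration, here is a minimal sketch of how an equal default split can be derived for any number of variations. The rounding rule (giving any leftover percentage point to the first variation) is an assumption for the example, not necessarily how the product resolves it.

```python
def default_distribution(variations):
    """Split 100% of traffic evenly across variations.

    Any leftover percentage points (e.g., 100 / 3) go to the first
    variation (an illustrative assumption, not the product's rule).
    """
    share = 100 // len(variations)
    split = {name: share for name in variations}
    split[variations[0]] += 100 - share * len(variations)
    return split

print(default_distribution(["A", "B"]))       # {'A': 50, 'B': 50}
print(default_distribution(["A", "B", "C"]))  # {'A': 34, 'B': 33, 'C': 33}
```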
Winning Metric
Choose the performance metric that decides the winner (a sketch of how these rates are typically computed follows this list):
Open Rate
Click Rate (Default)
Conversion Rate
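As a rough sketch, these rates are conventionally computed against delivered messages. The exact event definitions and denominators the product uses are not spelled out here, so treat the formulas below as illustrative assumptions.

```python
def rate(events, delivered):
    """Generic rate: events (opens, clicks, or conversions) per delivered message."""
    return events / delivered if delivered else 0.0

# Example: variation A delivered 200 messages with 80 opens, 30 clicks, 12 conversions.
print(f"open rate:       {rate(80, 200):.0%}")   # 40%
print(f"click rate:      {rate(30, 200):.0%}")   # 15%
print(f"conversion rate: {rate(12, 200):.0%}")   # 6%
```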
Automatic Winner Selection
The test runs until the selected winning metric reaches statistical significance (so a winner can be determined with sufficient confidence) or until the defined time duration limit is reached, whichever comes first.
Time Duration
Set how long the test should run before it ends, with or without a winner being determined. The default is 365 days.
If no clear winner is determined within this time, you can manually select one.
Your changes to test settings are saved automatically. You can also edit variations or duplicate them to add more from this page.
Additionally, you can preview your content by clicking the preview (eye) icon.

Publishing a Test
When you are done with the A/B test settings, publish the journey; this also publishes any draft tests created in the journey.
Once published, the test shows the In Progress status.
Click View report to see the performance details for the test.

You can manually end the test at any time by clicking A/B Test Settings > End Test.

AI-Powered Optimization
Journeys uses an AI-driven multi-armed bandit algorithm to automatically optimize traffic distribution among variations in real time.
Instead of splitting audience traffic evenly for the entire test duration, the algorithm continuously shifts more traffic to the better-performing variations based on the chosen winning metric (e.g., click rate).
Here’s how it works:
When AI Optimization Begins
AI optimization starts after each variation has at least 50 delivered messages.
The test must have been running for at least 2 days to ensure there’s enough data for confident evaluation.
How Optimization Works
Once activated, the algorithm dynamically updates traffic allocation—sending a higher share of new audience members to variations showing stronger performance.
This approach allows your campaign to maximize engagement even while testing is in progress.
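The documentation names a multi-armed bandit but not the specific algorithm, so the sketch below uses Thompson sampling over a binary success metric (for example, clicks per delivered message) purely to illustrate how traffic can shift toward stronger variations once each has enough deliveries. The class and function names here are hypothetical.

```python
import random

class Variation:
    def __init__(self, name):
        self.name = name
        self.delivered = 0   # messages delivered so far
        self.successes = 0   # e.g., clicks, per the chosen winning metric

def thompson_allocation(variations, samples=10_000, min_delivered=50):
    """Estimate a traffic split by sampling each variation's Beta posterior.

    Until every variation has the minimum number of deliveries, traffic
    stays evenly split (mirroring the activation threshold described above).
    """
    if any(v.delivered < min_delivered for v in variations):
        return {v.name: 1 / len(variations) for v in variations}

    wins = {v.name: 0 for v in variations}
    for _ in range(samples):
        draws = {
            v.name: random.betavariate(1 + v.successes,
                                       1 + v.delivered - v.successes)
            for v in variations
        }
        wins[max(draws, key=draws.get)] += 1
    return {name: count / samples for name, count in wins.items()}

a, b = Variation("A"), Variation("B")
a.delivered, a.successes = 400, 60   # ~15% click rate
b.delivered, b.successes = 400, 88   # ~22% click rate
print(thompson_allocation([a, b]))   # most new traffic shifts toward B
```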
When the Test Ends Automatically
The AI continuously monitors performance.
The test ends automatically when:
The algorithm determines that no further optimization is possible and a clear winner is identified based on the confidence level of performance differences, or
The time duration limit is reached.
If the algorithm finds that performance differences are not statistically significant, no automatic winner will be chosen.
The algorithm requires at least 95% confidence across 100 iterations of traffic distribution changes. It currently evaluates performance and adjusts the distribution every 30 minutes.
Each variation’s traffic share can change by at most 20% in a single adjustment. When the experiment reaches a stage where further changes to the traffic distribution would be no more than 1% and confidence reaches 95%, the test is completed.
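To make these thresholds concrete, here is a hedged sketch of how the 30-minute cadence, the 20% per-adjustment cap, and the 1%/95% stopping condition could fit together. The function names and the way confidence is supplied are assumptions for illustration, not the product’s actual implementation.

```python
MAX_SHIFT = 0.20                 # max change per variation per adjustment
STOP_SHIFT = 0.01                # test can complete once shifts are at most 1%
CONFIDENCE_TARGET = 0.95         # required confidence in the leading variation
EVALUATION_INTERVAL_MIN = 30     # performance is re-evaluated every 30 minutes

def clamp_shift(current_share, proposed_share, max_shift=MAX_SHIFT):
    """Limit how far a variation's traffic share can move in one adjustment."""
    delta = proposed_share - current_share
    delta = max(-max_shift, min(max_shift, delta))
    return current_share + delta

def should_end_test(last_shifts, confidence):
    """End the test once traffic has effectively stopped moving and the
    winner confidence has reached the target."""
    return max(abs(s) for s in last_shifts) <= STOP_SHIFT and confidence >= CONFIDENCE_TARGET

# Example adjustment cycle: variation B's share "wants" to jump from 50% to 80%,
# but the 20% cap limits it to 70% in this round.
print(clamp_shift(0.50, 0.80))                 # 0.7
print(should_end_test([0.004, -0.004], 0.97))  # True
```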
Selecting a Winner
You can manually end a test at any time by selecting “End Test and Pick Winner.”
There are three possible outcomes:
Winner Automatically Selected by AI
The AI identifies a clear top performer and assigns the remaining traffic to that variation.
The test ends automatically, and all future contacts entering this step receive the winning version.
Manual Winner Selection
If the test ends without sufficient confidence, or while the test is still running, you can manually review performance metrics and choose a winner.

No Winner Determined
In cases where the data is inconclusive (low volume, small differences, or short duration), no winner will be automatically picked.
You can still manually select a variation based on observed performance.

Viewing Results
All test metrics and outcomes are available in the A/B Test tab on the Journey Report page.
Here you can see:
Each variation’s performance on the winning metric
Number of sends and deliveries per variation
Variation performance changes over time (for AI-optimized tests)
Selected or auto-selected winner

You can also view the metrics filtered by variation in the Journey Insights view by opening a Send Message step.

Best Practices
Test only one element at a time for clear insights.
Allow at least 2–3 days for sufficient data before evaluating results.
Ensure each variation has enough recipients (≥50) to enable AI optimization.
Don’t end the test too early — confidence builds over time.
Use results to refine future message content and improve journey performance.