
A/B Test

An overview of how to run split tests, why they matter, and how to use the results to improve engagement and relevance.


A/B Test (Split test)

A/B testing lets you compare different versions of your email — content, design, subject lines, or send times — to see what generates higher engagement. Send the variants to smaller groups, study the results, and optionally let the platform send the winning version to the rest of your audience automatically.




How A/B testing works

  1. You build an email with multiple content versions (or choose to test send time instead).

  2. In the A/B test step, you define the splits — how your audience is divided into groups — and assign a content version (or send time) to each.

  3. The email is sent to the split groups. All Profiles in the sending are randomly assigned to a group.

  4. You study the results in the Email Report to see which version performed better.

  5. Optional (with winner): After a defined waiting period, the winning version is automatically sent to the remaining Profiles.
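For example, in a test with winner (numbers illustrative): with 50,000 Profiles and two 10% test splits, 5,000 Profiles receive Version A, 5,000 receive Version B, and after the waiting period the remaining 40,000 automatically receive the winning version.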


What can you test?

| Test type | What varies between versions | Example |
| --- | --- | --- |
| Email content/design | Different Rows are shown to different groups, e.g. different hero images, CTAs, or copy | Version A has a pink CTA button; Version B has a cyan CTA button |
| Subject line | Each split can have a different subject line and preheader | Split 1: "Your September plan is ready" vs. Split 2: "3 things to try this month" |
| Sender name | Each split can have a different sender name | Split 1: "Your Brand" vs. Split 2: "Anna at Your Brand" |
| Send time | Same content, different delivery times | Split 1 sends at 08:00; Split 2 sends at 12:00 |

You can combine content, subject line, and sender name variations in a single test. Send time testing is a separate test type.

💡 You cannot use both a segment and a split test version on the same Row

If you have applied a segment on a content row, the option to assign the row to a split test version is not available, and vice versa.


Test types: with and without winner

| | A/B test without winner | A/B test with winner |
| --- | --- | --- |
| How it works | All splits are sent to the full audience at the scheduled time. You compare results in the report afterwards. | Splits are sent to test groups first. After a defined waiting period, the winning version is automatically sent to the remaining Profiles. |
| Best for | Smaller audiences, straightforward 50/50 comparisons, and learning about your audience's preferences over time. | Larger audiences where you want to maximise performance on a single send. The test groups identify the best version; the rest of the audience gets the winner. |
| Audience | All Profiles receive one of the split versions. | Test groups receive the variants; the remaining group receives the winner. |
| Send time test | Available as an option. | Not available; send time tests use the "without winner" type. |


Step-by-step: Create content versions

Content versions are created within a single email in the editor — not as separate emails. You define which Rows belong to which version, and unassigned Rows are shown to everyone.

  1. Design the email as you normally would. Include all content for all versions — for example, two different hero image Rows, or two different CTA button Rows.

  2. Select a Row that should only appear in Version A. Click the Row in the structure panel (left side).

  3. In the Row settings (right side), open A/B Test Versions.

  4. Type a version name (e.g. "Version A") and select it. This Row is now assigned to Version A.

  5. Repeat for Version B: Select the Row(s) for Version B, open A/B Test Versions, and type a different version name (e.g. "Version B").

  6. Use Preview to check how each version looks.

💡 Rows without a version = shown to everyone


Any Row that isn't assigned to a specific version will appear in all variants. This means your header, footer, and shared content blocks stay consistent — only the Rows you explicitly assign to a version will differ between splits.
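For example, using the CTA test from the table above, an email might contain: a header Row with no version (shown to everyone), a pink-button Row assigned to Version A, a cyan-button Row assigned to Version B, and a footer Row with no version.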

When your versions are ready, click Next step, select your Subscription, and proceed to the A/B test setup.


Step-by-step: Set up the A/B test

  1. Select the test type from the dropdown (A/B test without winner, or A/B test with winner).

  2. Choose the number of splits (how many groups to divide your audience into). You can have up to five.

  3. Click Calculate now to see how many Profiles match your settings.


  4. Use the percentage sliders to define the size of each split: drag the dot on the line to the percentage you want. Click the padlock icon on the left-hand side to lock a value while adjusting the others.

  5. For each split, set:

    • Subject line and preheader (can differ between splits)

    • Sender name (can differ between splits)

    • Content version — select which version each split receives
      If you have created versions but have not assigned them all to splits, a pop-up reminds you to check your version assignments; you can dismiss it and continue.

    • Google Analytics — set the UTM values per split

  6. If using A/B test with winner, click Define winner criteria (see below).

  7. Click Next step to set the schedule.

  8. Set the send time and confirm on the Overview page.
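To make steps 2–5 concrete, here is a minimal sketch of how split percentages translate into group sizes and how per-split UTM values keep Google Analytics tracking separate. The audience size, percentages, URL, and UTM parameter values are illustrative assumptions, not values the platform prescribes.

```python
# Minimal sketch: split percentages -> group sizes, plus per-split UTM values.
# All numbers and parameter values here are hypothetical examples.

audience = 50_000
splits = {"Split 1": 0.10, "Split 2": 0.10, "Rest": 0.80}

for name, share in splits.items():
    print(f"{name}: {int(audience * share):,} Profiles")
# Split 1: 5,000 Profiles / Split 2: 5,000 Profiles / Rest: 40,000 Profiles

# A per-split utm_content value lets you separate the variants in Google Analytics:
base = "https://example.com/landing?utm_source=newsletter&utm_medium=email&utm_campaign=september"
for variant in ("split-a", "split-b"):
    print(f"{base}&utm_content={variant}")
```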

💡 Cancel split-sending


If you change your mind, click Cancel split-sending on the A/B test page. This removes the test setup and the email will be sent as a regular, single-version send.


Defining winner criteria

When using A/B test with winner, you define how the platform determines which split performed best:

| Setting | What it controls |
| --- | --- |
| Winner criteria | The metric that determines the winner (see the options below). |
| Select a link group | Only visible when the winner criteria is "Number of clicks on link group". Choose the Link Group to base the winner decision on. Link Groups are assigned to links in Link Settings. |
| If there is a draw | Which split to send to the rest of your audience if the test results are too close to call. |
| Schedule the winner sendout | The number of hours to wait after the test splits are sent before the winner is determined and sent to the remaining group. The page confirms: "The rest will be sent automatically when condition met." |

The available winner criteria are:

  • Number of total opens: the split with the most total opens wins.

  • Number of unique opens: the split with the most unique openers wins.

  • Number of total clicks: the split with the most total link clicks wins.

  • Number of unique clicks: the split with the most unique clickers wins.

  • Number of clicks on link group: the split with the most clicks on a specific Link Group wins. When selected, a "Select a link group" dropdown appears so you can choose which group to measure.

Testing with Link Groups

Link Groups let you measure A/B test performance based on clicks on a specific set of links rather than all clicks in the email. This is especially powerful when your email contains multiple links but only one of them is the conversion action you care about.

How it works

  1. In the email editor, open Link Settings for the link(s) you want to track as a group.

  2. In the Link Group field, type a group name (e.g. "Main CTA" or "Product page"). Assign the same group name to every link you want included — across both A/B split versions.

  3. In the A/B test setup (A/B test step in the wizard), choose "Number of clicks on link group" as the winner criteria.


  4. Select the Link Group you created from the dropdown.

  5. The winner is determined by which split generated the most clicks on links in that specific group — ignoring clicks on all other links in the email.
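To illustrate the counting logic, here is a minimal sketch under assumed inputs. The click records, field layout, and group name are hypothetical, not the platform's data model; in practice the platform does this tallying for you.

```python
from collections import Counter

# Hypothetical click events as (split, link_group) pairs.
clicks = [
    ("Split 1", "Main CTA"), ("Split 1", "Footer"), ("Split 1", "Main CTA"),
    ("Split 2", "Main CTA"), ("Split 2", "Social"),
]

# Count only clicks on the chosen Link Group, ignoring all other links.
target_group = "Main CTA"
tally = Counter(split for split, group in clicks if group == target_group)

winner, wins = tally.most_common(1)[0]
print(tally)              # Counter({'Split 1': 2, 'Split 2': 1})
print("Winner:", winner)  # Winner: Split 1
```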

When to use Link Groups for A/B testing

| Use case | Why Link Groups help |
| --- | --- |
| Testing CTA wording or design | Your email has a navigation menu, social links, and a main CTA. You only care about which CTA version drives more clicks, not total clicks that include the menu. Assign only the CTA button to a Link Group and use it as the winner criteria. |
| Testing a hero image link vs a text link | Both versions link to the same product page but present it differently (image vs text). Assign both the hero image link and the product text link to the same Link Group. The winner is determined by combined clicks on those specific links, not on unrelated footer or social links. |
| Testing product placement | Split A features Product X in the hero spot; Split B features Product Y. Assign each product's links to a Link Group. Use "clicks on link group" to see which product placement generated more interest, even if overall email clicks are similar. |
| Newsletter with multiple sections | Your newsletter has five article links plus a CTA. You want to test which subject line drives more clicks on the CTA specifically, not on the articles. Assign only the CTA to a Link Group. |

💡 Tip: Name your Link Groups descriptively

Use clear names like "Main CTA", "Product page links", or "Registration button" — you'll need to recognise them in the A/B test winner criteria dropdown. The Link Group name is also visible in the Link Settings panel, so your team will understand what it's for.


Testing send time

To test whether your audience responds better at different times, use the A/B test without winner type and select Send at different times. Give each split the same content at a different time, so that send time is the only variable. You can technically combine send time with content, subject line, or sender variations, but be mindful when interpreting the results: with multiple variables, you won't know which change drove the difference.

This is useful for answering questions like: "Do our subscribers engage more with emails sent in the morning or at lunchtime?"


Reading the A/B test report

After your A/B test has been sent, the report is available from the Scheduled & Sent tab on the Email tool start page. Select the email and click Report in the bottom bar.

The A/B test report has three tabs: Overview, Links, and Bounces.

The Overview tab

The top of the report shows a delivery summary for each split and the Rest group (the Profiles who received the winning version). Each split displays:

  • The split percentage (e.g. 10%, 10%, 80% for Rest)

  • Total Sent count

  • Delivered count and percentage

  • Bounced count and percentage

  • The winning split is labelled with a "Winner" badge

Winner details panel

On the right side, the Winner details panel shows how the winner was determined:

| Field | What it shows |
| --- | --- |
| Winner condition | The metric used to pick the winner (e.g. "Number of total opens"). |
| Allowed variations | The tie threshold you set (e.g. "Simple Majority" if set to 0%). |
| Time winner was picked | The date and time when the system determined the winner and sent to the Rest group. |
| Win or Draw | Whether the result was a clear Win or a Draw (tie). |
| Split Sent | Which split version was sent to the Rest group. |

Email breakdown table

Below the delivery summary, the Email breakdown table compares all splits and the Rest group side by side:

| Column | What it shows |
| --- | --- |
| Split | Split 1, Split 2, Rest (or more if you used more splits). |
| Opens | Total number of opens. |
| Opens (unique) | Number of unique Profiles who opened, with percentage. |
| Clicks | Total number of link clicks. |
| Clicks (unique) | Number of unique Profiles who clicked, with percentage. |
| Unsubscribes | Number of unsubscribes, with percentage. |
| CTOR | Click-to-open rate: unique clicks divided by unique opens. This is the best measure of how well your content performed once someone opened the email. |
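For example (numbers illustrative), a split with 400 unique opens and 120 unique clicks has a CTOR of 120 / 400 = 30%.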

Per-split detail

Below the breakdown table, each split has its own expandable section showing the same metrics as a standard Email Report: Opens (total and unique), Clicks (total and unique), Spam complaints, Unsubscribes, and CTOR. A Details section shows the email name, subject line, preheader, sender name, sender email, reply-to, and send time for that split.

Sending details panel

The right-side panel also shows the Sending to details — the Section, Folder, Subscription, import filter, duplicate exclusion settings, excluded Segments, and included Tags that were used for this send. This helps you verify the audience configuration when reviewing results.

Scheduled & Sent list view

On the Email start page, A/B test emails show as an expandable item in the Scheduled & Sent tab. Click the arrow to expand and see all splits listed individually with their Profiles count, Delivery rate, Open rate, Click rate, Status, and Scheduled/Sent date. This gives you a quick at-a-glance comparison without opening the full report.

How to use the results

Compare the splits on the metric that matters most to your goal:

  • Testing subject lines? Compare unique opens — the subject line's job is to get the email opened.

  • Testing content or CTA design? Compare CTOR — this measures how well the content performed once someone opened. A higher CTOR means the content drove more action.

  • Testing send time? Compare unique opens and unique clicks — both are affected by when the email arrives.

If a specific approach consistently outperforms across multiple tests, adopt it as your default. If results are close (within a few percentage points), the difference may not be meaningful — test again with a larger audience or a more distinct variation.


Tips for meaningful results

| Tip | Why |
| --- | --- |
| Test one variable at a time | If you change the subject line, CTA colour, and hero image between versions, you won't know which change drove the difference. Isolate one variable per test for clear results. |
| Aim for at least 1,000 Profiles per split | With fewer than ~1,000 Profiles per group, the sample size is too small to draw reliable conclusions. You can still run the test, but treat the results as directional, not definitive. A quick way to sanity-check a result is sketched below this table. |
| Allow enough time for the winner test | If you're using "A/B test with winner", set a wait time of at least 2–4 hours. Most opens happen within the first few hours; a 30-minute window is too short to capture meaningful data. |
| Document your learnings | Keep a simple log of what you tested and what won. Over time, this builds a picture of what your specific audience responds to best. |
| Don't over-test | A/B testing is most valuable when used intentionally. Testing every send dilutes the effort. Focus on testing when you have a genuine question about what will work better. |
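If you want a rough statistical check on whether a difference between two splits is real or just noise, a standard two-proportion z-test is one option. This is a minimal sketch with hypothetical numbers; it is not a feature of the platform.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test, e.g. unique opens per split."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: 5,000 Profiles per split, 1,100 vs 1,000 unique opens
z, p = two_proportion_z(1100, 5000, 1000, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 here, so the gap is unlikely to be noise
```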


Troubleshooting

| Issue | Cause and solution |
| --- | --- |
| "I don't see the A/B test option" | The A/B test setup appears after you select a Subscription in the Send To step. You need to have content versions defined in the editor first (for design tests), or select the test type after choosing your audience. |
| "Profiles added after scheduling aren't in the test" | By design: Profiles must exist in the account before the email is scheduled to be included in the A/B test. |
| "My content versions look the same in Preview" | Check the Row settings and make sure the Rows for each version are assigned to different version names. Use the Preview's version selector to switch between them. |
| "The winner wasn't sent" | Check that you set a wait time and that enough time has passed. If the results were a tie and no tie-breaker version was selected, the system may not have a clear winner to send. |
| "I want to cancel the A/B test" | Click Cancel split-sending on the A/B test page. The email reverts to a regular single-version send. |


What's next?

  1. Email Report — Analyse your A/B test results in detail.

  2. Data tags and Segments — Combine A/B testing with audience targeting for even more relevant sends.

  3. Email – Best Practice — Apply what you learn from testing to your design and content strategy.
