Split Node
The Split node divides profiles into 2, 3, or 4 different paths based on percentages. The distribution is random — it doesn't look at any profile data. It's the primary tool for A/B testing and traffic splitting in Marketing Automation flows.
When to use the Split node
Use the Split node when you want to distribute profiles randomly across different paths. The node doesn't evaluate any data — it simply assigns each incoming profile to a path based on the percentages you define.
This makes it ideal for:
A/B testing — compare different email subject lines, content, or offers to see which performs better.
Reference group — hold back a small share (e.g. 10%) of profiles from the flow activities and compare outcomes, so you can see whether the flow makes a difference at all.
Multi-variant testing — test 3 or 4 different approaches simultaneously.
Gradual rollout — send a new message to a small percentage first, then expand if results are good.
Channel testing — send one group an email, another an SMS, and compare engagement.
💡 Good to know — Random means random
The Split node uses random distribution. You can't control which profiles go down which path — only the percentage of profiles per path. If you need to route profiles based on their data (attributes, tags, segments), use the Check Profile node instead.
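Although the product's internals aren't documented here, percentage-based random routing behaves like a weighted random choice. A minimal Python sketch (illustrative only, not the actual implementation):

```python
import random

def assign_path(percentages, rng=random):
    """Pick a path index at random, weighted by the configured percentages.

    `percentages` is a list such as [50, 50] or [10, 90] and must sum to 100.
    No profile data is consulted; the assignment is purely random.
    """
    if sum(percentages) != 100:
        raise ValueError("Path percentages must total 100%")
    paths = list(range(len(percentages)))
    return rng.choices(paths, weights=percentages, k=1)[0]

# Over many profiles, the observed distribution approaches the configured
# split -- but any individual profile's path is unpredictable.
counts = [0, 0]
for _ in range(10_000):
    counts[assign_path([10, 90])] += 1
```

This is also why small samples can look skewed: the percentages only hold in aggregate, never per profile.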
Common use cases
A/B test email subject lines
Split node (50/50) → Path A: Email with subject "Your exclusive offer inside" → Path B: Email with subject "Don't miss out — 24 hours left". Compare open rates in the email reports to determine the winner.
Test email vs. SMS
Split node (50/50) → Path A: Email node → Path B: SMS node. Compare engagement (clicks, conversions) to determine which channel performs better for this audience and message type.
Multi-variant content test
Split node (33/33/34, 3 paths) → Path A: Product-focused email → Path B: Story-driven email → Path C: Testimonial email. Compare click-through rates across all three to find the best content angle.
Gradual rollout of a new message
Split node (10/90, custom) → Path A (10%): New experimental email → Path B (90%): Proven existing email. Monitor Path A's performance. If it outperforms, increase the percentage in a future iteration.
Test different wait times
Split node (50/50) → Path A: Time node (2 days) → Email → Path B: Time node (5 days) → Email. Compare engagement to find the optimal spacing for your nurture sequence.
Even Split vs. Custom Split
| Even Split | Custom Split |
| --- | --- |
| Divides profiles equally across all paths. For example: 2 paths = 50/50. 3 paths = 33/33/34. 4 paths = 25/25/25/25. | You define the percentage for each path. For example: 10/90 (gradual rollout), 70/20/10 (primary + two test variants), or any combination that adds up to 100%. |
| Best for: straightforward A/B tests where each variant should receive equal traffic. | Best for: gradual rollouts, weighted tests where you want most traffic on the proven approach, or unequal testing scenarios. |
| Choose 2, 3, or 4 paths. | Choose 2, 3, or 4 paths and set percentages for each. Must total 100%. |
Setting up: Even Split
Drag the Split node onto the canvas and click it to open the configuration panel.
Select "Even Split".
Choose the number of paths: 2, 3, or 4. The paths appear on the canvas immediately.
Connect nodes to each path. Each path needs at least one subsequent node.
Setting up: Custom Split
Click the Split node and select "Custom Split".
Choose the number of paths: 2, 3, or 4.
Set the percentage for each path. Enter the percentage of profiles that should go down each path. The percentages must add up to 100%. Use the padlock to lock your settings when done.
Connect nodes to each path and build out each variant.
💡 Tip — Label your paths
The canvas doesn't show labels on Split paths by default. Use the Goals feature in the bottom bar to document what each path represents — for example: "Path A = Short subject line, Path B = Long subject line, Path C = Emoji subject line". This helps colleagues (and future you) understand the test without clicking into every node.
Using the Split node for A/B testing
The Split node is Marketing Automation's built-in A/B testing mechanism. Here's a step-by-step approach to running a meaningful test:
Step 1: Define what you're testing
Change one variable at a time to get clear results. Common variables to test:
| Variable | What to compare |
| --- | --- |
| Subject line | Two different subject lines on the same email content. Compare open rates. |
| Email content | Same subject line, different body content (e.g. product-focused vs. story-driven). Compare click-through rates. |
| Call to action | Same email, different CTA text or button colour. Compare click rates on the CTA. |
| Send timing | Same email, different Time node delays before sending. Compare open and click rates. |
| Channel | One path sends email, another sends SMS. Compare conversions or downstream actions. |
| Offer | Different discount levels or value propositions. Compare conversion rates. |
Step 2: Build the flow
Place the Split node at the point where the variants diverge. Keep everything before the Split node identical for both groups — so the only difference is the variable you're testing.
Step 3: Let it run and compare
After enough profiles have passed through (see volume guidance below), compare the results:
Click each Email node on the canvas to view its email report (opens, clicks, bounces).
Use the Marketing Automation Report for a broader flow-level view.
Compare the metric that matches your test variable (open rate for subject lines, click rate for content, conversion for offers).
Step 4: Act on results
Once you have a clear winner, you can either:
Remove the Split node and keep only the winning path for all future profiles.
Adjust the Custom Split to send the majority of traffic to the winner (e.g. 90/10) while continuing to test the alternative at low volume.
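The email reports give you raw counts per variant. To judge whether the gap between two variants is real rather than chance, you can apply a standard two-proportion z-test to those counts. This is generic statistics, not a built-in product feature; a Python sketch:

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test on counts from two variants.

    Returns (z, p_value). A small p-value (e.g. < 0.05) suggests the
    difference between the two rates is unlikely to be pure chance.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 120/500 opens (24%) for variant A
# vs. 90/500 opens (18%) for variant B
z, p = two_proportion_z(120, 500, 90, 500)
```

In this hypothetical example the p-value lands below 0.05, so variant A's higher open rate would usually be treated as a genuine win rather than noise.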
How many profiles do you need?
A/B tests need enough profiles to produce meaningful results. If too few profiles pass through, the difference between variants may be due to chance rather than the variable you're testing.
| Metric you're comparing | Rough minimum per path |
| --- | --- |
| Open rate (subject line test) | At least 200–300 profiles per path to see a statistically meaningful difference in open rates (assuming typical open rates of 15–30%). |
| Click rate (content/CTA test) | At least 500–1,000 profiles per path. Click rates are typically lower (2–10%), so you need more volume to detect differences. |
| Conversion (offer/channel test) | At least 1,000+ profiles per path. Conversion rates are often very low (1–5%), requiring large samples. |
💡 Good to know
These are rough guidelines, not exact thresholds. The key principle: the smaller the expected difference between variants, the more profiles you need. If your flow only processes 50 profiles per month, A/B testing may not produce reliable results — consider testing in a higher-volume flow or via standalone email campaigns with the Email tool's built-in A/B testing instead.
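These minimums follow from the standard sample-size formula for comparing two proportions. A sketch using conventional values (5% significance, 80% power); the example rates are assumptions, not product defaults:

```python
from math import ceil

def sample_size_per_path(base_rate, expected_lift):
    """Rough per-path sample size needed to detect an absolute
    `expected_lift` over `base_rate`, at ~5% significance and ~80% power.

    Uses the standard normal quantiles z_alpha/2 = 1.96 and z_beta = 0.84.
    """
    p1 = base_rate
    p2 = base_rate + expected_lift
    p_bar = (p1 + p2) / 2
    numerator = (1.96 + 0.84) ** 2 * 2 * p_bar * (1 - p_bar)
    return ceil(numerator / expected_lift ** 2)

# Detecting a 10-point lift on a 20% open rate needs roughly 300 profiles
# per path; halving the expected lift roughly quadruples the requirement.
n_big_lift = sample_size_per_path(0.20, 0.10)
n_small_lift = sample_size_per_path(0.20, 0.05)
```

The quadratic term in the denominator is the key principle from the note above in formula form: smaller expected differences demand disproportionately more profiles.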
Split vs. Check Profile — what's the difference?
Both nodes create branching paths, but they work very differently:
| Node | How it routes | Use when… |
| --- | --- | --- |
| Split | Random distribution by percentage. Doesn't look at any profile data. | You want to test different approaches with randomly assigned groups. Every group should be comparable — no bias. |
| Check Profile | Data-driven routing based on attributes, tags, events, or segments. Yes/No paths. | You want to personalise the journey based on who the profile is or what they've done. Routing is deterministic, not random. |
💡 Example — When to use which
"I want to test whether subject line A or B gets more opens" → Split node (random, unbiased test).
"I want Swedish customers to receive Swedish content" → Check Profile node (data-driven routing by Country attribute).
"I want to test whether email or SMS works better for re-engagement" → Split node (random channel assignment).
"I want VIP customers to get a premium offer" → Check Profile node (data-driven routing by VIP tag).
Troubleshooting: Not getting enough results?
| Check this | Why it matters |
| --- | --- |
| Is the sample size too small? | With very few profiles (e.g. 3–5), the random distribution may appear skewed. With a 50/50 split and 4 profiles, it's entirely possible for 3 or even all 4 to go down the same path by chance. Wait for more profiles to pass through — the distribution evens out over larger numbers. |
| Is the Split node correctly configured? | Click the node and verify the split type (Even or Custom) and the number of paths. If you intended 3 paths but only configured 2, one variant is missing entirely. |
| Are all paths correctly connected to subsequent nodes? | If a path has the wrong subsequent node connected, profiles assigned to that path may end up in a dead end or not progress as expected. Make sure every path has at least one node after it. |
| Use Node Stats to verify distribution | Click the Split node on an active flow. Check the Node Stats to see how many profiles went down each path. The actual distribution should approach the configured percentages as more profiles pass through. |
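The small-sample effect described above is easy to demonstrate with a quick simulation (a generic sketch, not product code):

```python
import random

def all_same_path_rate(n_profiles, trials=100_000, seed=42):
    """Estimate how often EVERY profile lands on the same side of a
    50/50 split, for a given number of profiles."""
    rng = random.Random(seed)
    all_same = 0
    for _ in range(trials):
        picks = [rng.random() < 0.5 for _ in range(n_profiles)]
        if all(picks) or not any(picks):
            all_same += 1
    return all_same / trials

# With only 4 profiles, all four take the same path about 12.5% of the
# time (2 * 0.5**4); with 100 profiles this essentially never happens.
rate_4 = all_same_path_rate(4)
rate_100 = all_same_path_rate(100, trials=10_000)
```

So a "broken" 50/50 split on a handful of profiles is expected behaviour; check Node Stats again after more volume has flowed through.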
Results look the same across variants?
| Check this | Why it matters |
| --- | --- |
| Did you only change one variable? | If you changed both the subject line AND the content, you won't know which caused the difference (or lack of it). Test one variable at a time. |
| Is the difference you're testing big enough? | Subtle differences (e.g. changing one word in a subject line) may not produce a measurable difference. Test bolder changes — different angles, tones, or offers. |
| Do you have enough profiles? | See volume guidance above. With too few profiles, real differences can be masked by statistical noise. |
Tips & best practices
Test one variable at a time. If you change the subject line, keep the email content identical. If you change the content, keep the subject the same. This is the only way to know what caused the difference in results.
Use Even Split for most A/B tests. Equal distribution eliminates bias and makes comparison straightforward. Only use Custom Split when you have a specific reason (gradual rollout, weighted testing).
Let the test run long enough. Don't declare a winner after 20 profiles. Wait until each path has received the minimum number of profiles for the metric you're testing (see volume guidance).
Compare the right metric. Subject line test → compare open rates. Content test → compare click rates. Offer test → compare conversions. Don't judge a subject line test by click rates — the subject line only affects whether the email is opened.
Document your tests. Use the Goals feature in the bottom bar to note what you're testing, when the test started, and what the expected outcome is. This makes it easy to review results later and share learnings with your team.
Don't forget End Flow nodes. Each path from the Split node should eventually lead to an End Flow node. This keeps the flow clean and ensures accurate completion reporting.
Consider merging paths after the test. If you only want to test one step (e.g. the subject line) but the rest of the journey is identical, you can reconnect the paths after the test variant — both paths feed into the same next node (e.g. a shared Wait for Event or Time node).
Use Custom Split for gradual rollouts. Start with 10% on the new variant and 90% on the proven one. If the 10% performs well, adjust to 50/50, then eventually 100% on the new approach.
Related articles
Marketing Automation Nodes — Overview of every node type.
Check Profile Node — Data-driven routing (vs. random distribution).
Email Node — Create different email variants for each path.
SMS Node — Test email vs. SMS channel effectiveness.
Time Node — Test different timing between messages.
Counter Node — Limit how many profiles go down a specific path (for volume control).
Marketing Automation Report — Compare performance across flow paths.
Navigate the Canvas — Goals feature for documenting tests.
Key Terms Glossary — Definitions for all Marketing Automation terms.