Create an Experiment

Set up an A/B test with variants and traffic allocation.

This guide walks you through setting up a new experiment from start to finish.

Prerequisites

  • Growth plan or higher (experiments are not available on Starter).
  • Two or more published flows you want to compare. Each variant in an experiment points to an existing flow, so build every variant as its own flow first.

Before creating an experiment, make sure you have built all the flows you want to test. There is no "edit variant" feature inside the experiment -- each variant must already exist as a flow.

Step 1: Open the experiment creator

  1. Navigate to Experiments in the sidebar.
  2. Click Create Experiment.
  3. Give the experiment a descriptive name (for example, "Onboarding CTA test - May 2026") and an optional description.

Step 2: Add variants

Each variant maps to a flow you have already published. By default, the creator starts with two variants named "Control" and "Variant B".

For each variant:

  1. Choose a name, or keep the default ("Control" or "Variant B").
  2. Select the flow that variant should use.
  3. Set the traffic percentage.

You can add more variants if you want to compare more than two flows. Keep in mind that more variants require more traffic to reach statistically meaningful results.
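The variant setup above can be sketched as a small data structure. The field names and validation below are illustrative only, not the product's actual API:

```python
# Hypothetical representation of an experiment's variant setup.
# Field names are illustrative, not the product's actual API.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str          # e.g. "Control" or "Variant B"
    flow_id: str       # the published flow this variant points to
    traffic_pct: int   # share of eligible users, 0-100

def validate_variants(variants: list[Variant]) -> None:
    """Check the constraints described above: at least two variants,
    each mapped to an existing flow, traffic summing to 100%."""
    if len(variants) < 2:
        raise ValueError("an experiment needs at least two variants")
    if any(not v.flow_id for v in variants):
        raise ValueError("every variant must point to an existing flow")
    if sum(v.traffic_pct for v in variants) != 100:
        raise ValueError("traffic percentages must add up to 100")

variants = [
    Variant("Control", "flow_onboarding_v1", 50),
    Variant("Variant B", "flow_onboarding_v2", 50),
]
validate_variants(variants)  # passes: two variants, flows set, 50 + 50 = 100
```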

Step 3: Define traffic split

Set the percentage of eligible users assigned to each variant. The percentages must add up to 100%.

  • 50/50 -- Standard A/B test: the control plus one variant. Fastest path to statistical significance.
  • 33/33/34 -- Three-way test. Requires more traffic for meaningful results.
  • 80/20 -- Limits exposure to an experimental variant. Slower to reach significance.

Choose your traffic split before starting the experiment. Changing the split mid-experiment invalidates your results because the sample populations are no longer comparable.

Step 4: Pick the primary metric

Choose one metric as the primary success indicator:

  • Completion rate -- best for onboarding flows or multi-step experiences where you want users to finish.
  • Conversion rate -- best when you have a specific goal event (like a purchase or feature activation) tied to the flow.
  • Dismiss rate -- useful for testing whether a variant is less intrusive or annoying.

Only one primary metric is tracked per experiment, and it drives the lift calculation and winner recommendation.
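As an illustration of the lift calculation, relative lift (the formula assumed here; the product may compute it differently) compares the variant's metric rate to the control's:

```python
# Relative lift of a variant over the control -- an assumed formula
# for illustration, not necessarily the product's exact calculation.
def lift(control_rate: float, variant_rate: float) -> float:
    """E.g. a completion rate moving from 0.40 to 0.50 is a +25% lift."""
    if control_rate == 0:
        raise ValueError("control rate must be non-zero")
    return (variant_rate - control_rate) / control_rate

print(f"{lift(0.40, 0.50):+.0%}")  # prints "+25%"
```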

Step 5: Configure confidence and sample size

Two additional settings control when the experiment is considered "ready" to call a winner:

  • Confidence level -- the statistical confidence required (default: 95%). Higher confidence means more certainty but needs more data.
  • Minimum sample size -- the minimum number of users per variant before results are considered (default: 100).

Leave the defaults unless you have a reason to change them.
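To see how the two settings interact, here is a rough sketch of a readiness check using a standard two-proportion z-test. This is one conventional approach; the product's actual statistics engine may differ.

```python
# Sketch of a winner-readiness check: enforce the minimum sample size
# per variant, then run a two-sided two-proportion z-test at the chosen
# confidence level. Illustrative only; the product's statistics may differ.
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ready_to_call(successes_a: int, n_a: int,
                  successes_b: int, n_b: int,
                  confidence: float = 0.95,
                  min_sample: int = 100) -> bool:
    if min(n_a, n_b) < min_sample:
        return False  # not enough users per variant yet
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False
    z = (p_b - p_a) / se
    p_value = 2 * (1 - normal_cdf(abs(z)))  # two-sided test
    return p_value < 1 - confidence

print(ready_to_call(400, 1000, 460, 1000))  # 40% vs 46% at n=1000: prints True
```

Raising the confidence level shrinks the acceptable p-value, so more data is needed before the same observed difference counts as significant, which matches the trade-off described above.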

Step 6: Review and save

Review the configuration:

  • Experiment name and description
  • Variants and their flow assignments
  • Traffic split percentages
  • Primary metric
  • Confidence level and minimum sample size

Click Save to create the experiment in Draft status. The experiment does not start collecting data until you explicitly start it.

Next steps

Once your experiment is saved, head to Run & Monitor to start the test and track results.
