Experiments
Run A/B tests on your flows to find what works best.
Experiments let you A/B test your flows to find the version that drives the best results. Instead of guessing which copy, layout, or structure works best, you can run a controlled test and let the data decide.
How experiments work
An experiment splits eligible users into groups. Each group sees a different flow that you have already built, and Setgreet tracks your chosen primary metric for each group. You then decide which flow performed best based on the results.
The basic structure (see the sketch after this list):
- Control -- a flow you have already created (typically the current version).
- Variant(s) -- other flows you have already created that represent alternative approaches.
- Traffic split -- the percentage of eligible users assigned to each variant.
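To make the structure concrete, here is a minimal sketch in TypeScript. Experiments are configured in the Setgreet dashboard, so the `ExperimentConfig` type and its field names are illustrative assumptions, not the actual API.

```typescript
// Illustrative only: experiments are set up in the dashboard, not in code.
// This type just makes the structure above concrete; the names are assumptions.
interface ExperimentConfig {
  control: string;        // ID of an existing flow (typically the current version)
  variants: string[];     // IDs of other existing flows to test against it
  trafficSplit: number[]; // percentage of eligible users per flow; must sum to 100
}

// Example: a 50/25/25 split between the control and two variants.
const onboardingTest: ExperimentConfig = {
  control: "flow_onboarding_v1",
  variants: ["flow_onboarding_v2", "flow_onboarding_v3"],
  trafficSplit: [50, 25, 25],
};
```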
The SDK handles variant assignment automatically. Each user stays in the same variant for the duration of the experiment, so their experience is stable.
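Sticky assignment like this is typically implemented by hashing a stable key rather than storing a random choice, so the same user always lands in the same bucket. The sketch below shows the general technique with a hypothetical `assignVariant` helper; it is not Setgreet's actual SDK code.

```typescript
// A minimal sketch of sticky variant assignment via deterministic hashing.
// Illustrates the general technique only, not Setgreet's implementation.
function assignVariant(
  userId: string,
  experimentId: string,
  trafficSplit: number[], // percentages per variant, summing to 100
): number {
  // Hash the (experiment, user) pair into a stable bucket from 0-99.
  const key = `${experimentId}:${userId}`;
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  const bucket = hash % 100;

  // Walk the cumulative split to find which variant owns this bucket.
  let cumulative = 0;
  for (let v = 0; v < trafficSplit.length; v++) {
    cumulative += trafficSplit[v];
    if (bucket < cumulative) return v; // index 0 = control
  }
  return 0; // unreachable if the split sums to 100
}
```

Because the bucket depends only on the user ID and experiment ID, the same user gets the same answer on every call, with no server round-trip or stored state required.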
Variants are existing flows, not copies of the control. Build each variant as a separate flow in the flow builder before creating the experiment, then select them when configuring the experiment.
Experiment lifecycle
| Status | Description |
|---|---|
| Draft | The experiment is configured but not running. You can still edit variants and traffic splits. |
| Running | The experiment is live. Users are being assigned to variants and metrics are being collected. |
| Paused | The experiment is temporarily stopped. No new users are assigned, but existing data is preserved. |
| Completed | The experiment has ended. Results are final and you can mark a winning variant. |
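The statuses form a small state machine. The sketch below encodes the transitions implied by the table (for example, a paused experiment preserves its data and can resume); the exact rules are inferred from the descriptions, not confirmed by the product.

```typescript
// A sketch of the lifecycle as a state machine. Transitions are inferred
// from the status descriptions above and may not match the product exactly.
type ExperimentStatus = "draft" | "running" | "paused" | "completed";

const allowedTransitions: Record<ExperimentStatus, ExperimentStatus[]> = {
  draft: ["running"],               // launch the experiment
  running: ["paused", "completed"], // pause it, or end it
  paused: ["running", "completed"], // resume, or end it
  completed: [],                    // results are final
};

function canTransition(from: ExperimentStatus, to: ExperimentStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```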
What you can test
Experiments work at the flow level. You can test any difference between flow variants:
- Different copy or messaging
- Different screen counts or ordering
- Different component layouts
- Different images or media
- Different call-to-action buttons
Available metrics
When creating an experiment, you choose one primary metric to measure success:
| Metric | Description |
|---|---|
| Completion rate | Percentage of users who reached the final screen of the flow. |
| Conversion rate | Percentage of users who triggered a defined conversion goal. |
| Dismiss rate | Percentage of users who closed the flow before completing it. |
Each experiment has a single primary metric; it is the basis for calculating lift over the control and for the winner recommendation.
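To make the definitions concrete, here is a minimal sketch of how the three rates and the lift over control could be computed from per-variant counts. The `VariantStats` shape and helper names are assumptions, not Setgreet's reporting API.

```typescript
// A sketch of the three rates and the lift over control, following the
// metric definitions above. The names here are assumptions for illustration.
interface VariantStats {
  entered: number;   // users who started the flow
  completed: number; // users who reached the final screen
  converted: number; // users who triggered the conversion goal
  dismissed: number; // users who closed the flow before completing it
}

const completionRate = (s: VariantStats) => s.completed / s.entered;
const conversionRate = (s: VariantStats) => s.converted / s.entered;
const dismissRate = (s: VariantStats) => s.dismissed / s.entered;

// Relative lift of a variant's primary metric over the control's.
function lift(variant: number, control: number): number {
  return (variant - control) / control;
}

// Example: control completes at 40%, variant at 48% -> lift of 0.20 (+20%).
console.log(lift(0.48, 0.4).toFixed(2)); // "0.20"
```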
Experiments require the Growth plan or higher. The Starter plan does not include A/B testing.
Next steps
- Create an Experiment -- set up variants and traffic allocation.
- Run & Monitor -- track performance in real time.
- Interpret Results -- analyze outcomes and decide what to ship.