Interpret Results
Analyze experiment results and decide what to ship.
After an experiment completes, the results page gives you the data you need to decide which flow to ship going forward.
Understanding the results
The completed experiment view shows the final metrics for every variant:
- Primary metric -- the metric you selected when creating the experiment (completion rate, conversion rate, or dismiss rate).
- Lift over control -- the percentage difference between each variant and the control. A positive lift on completion/conversion rate means the variant outperformed the control. A negative lift on dismiss rate means fewer users dismissed the flow.
- Sample size -- the number of unique users exposed to each variant.
- Confidence -- how confident you can be, statistically, that the observed difference is real rather than random noise. The sketch after this list illustrates one common way lift and confidence can be derived from the raw counts.
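Setgreet calculates lift and confidence for you, but it can help to see how these numbers relate to the raw counts. The sketch below is an illustration only, using a standard two-proportion z-test on hypothetical numbers; the exact method Setgreet uses may differ, and the function name is made up for the example.

```python
from math import erf, sqrt

def lift_and_confidence(control_users, control_completions,
                        variant_users, variant_completions):
    """Illustrative lift and confidence for a rate metric (two-proportion z-test)."""
    p_control = control_completions / control_users
    p_variant = variant_completions / variant_users

    # Lift over control: the relative difference between the two rates.
    lift = (p_variant - p_control) / p_control

    # Standard error of the difference, based on the pooled rate.
    pooled = (control_completions + variant_completions) / (control_users + variant_users)
    se = sqrt(pooled * (1 - pooled) * (1 / control_users + 1 / variant_users))
    z = (p_variant - p_control) / se

    # Two-sided confidence that the difference is not just noise.
    confidence = erf(abs(z) / sqrt(2))
    return lift, confidence

# Hypothetical experiment: 2,000 users per variant.
lift, confidence = lift_and_confidence(2000, 640, 2000, 712)
print(f"Lift over control: {lift:+.1%}")  # roughly +11%
print(f"Confidence: {confidence:.1%}")    # roughly 98%
```

In this made-up example, the variant's completion rate (35.6%) beats the control's (32.0%) with high confidence; 95% or higher is the bar commonly used to call a result significant.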
Winner badge
When the results show a clear leader on the primary metric and the sample size is adequate, Setgreet displays a Winner badge on the best-performing variant. If no variant has a clear advantage, no badge is shown -- in that case, the variants performed similarly and either can be used.
The Winner badge is a recommendation, not an automated action. Setgreet does not change your flows based on the result. You always decide which flow to ship.
Deciding what to ship
Each variant in an experiment is just a flow you already built, so applying the result is a manual decision:
- If a variant wins -- you can leave its flow as-is in your account and, if needed, unpublish or archive the losing flows to avoid confusion.
- If the control wins -- no action is needed. The control flow is already live.
- If results are inconclusive -- either flow is acceptable. Consider running a follow-up experiment with a larger sample or a more distinct variant.
Best practices for interpreting results
- Do not end experiments early. If a variant looks like a winner after one day, resist the urge to stop. Early results are noisy and can reverse with more data.
- Wait for the minimum sample size. Results below the configured sample threshold are not reliable. For a rough sense of how many users a given lift requires, see the sketch after this list.
- Account for external factors. If you ran a marketing campaign during the experiment, it may have skewed results toward users who behave differently from your typical audience.
- Document your learnings. Note what worked and why in the experiment description. These insights compound over time and inform future flow design.
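The configured sample threshold lives in your experiment settings, but if you want a rough feel for how much traffic a given improvement needs, the standard two-proportion sample-size formula is a reasonable back-of-the-envelope check. The sketch below is an approximation under assumed defaults (95% confidence, 80% power) with a hypothetical baseline rate and target lift, not Setgreet's exact calculation.

```python
from statistics import NormalDist

def users_per_variant(baseline_rate, min_detectable_lift,
                      confidence=0.95, power=0.80):
    """Rough users-per-variant estimate for detecting a relative lift in a rate metric."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)

    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)

    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1

# Hypothetical example: 30% baseline completion rate, 10% relative lift worth detecting.
print(users_per_variant(0.30, 0.10))  # about 3,760 users per variant
```

The smaller the lift you want to detect, the more users you need -- halving the detectable lift roughly quadruples the required sample, which is why inconclusive results often call for either more traffic or a more distinct variant.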
Running follow-up experiments
Good experimentation is iterative. After deciding on a winner:
- Hypothesize why it won (clearer CTA? fewer screens? better imagery?).
- Build a new flow that pushes the winning element further.
- Run a new experiment comparing the current winner to the new variant.
Small, compounding improvements to your flows often deliver better results than occasional large redesigns.