Having the option to define experiment success in terms of relative improvement & confidence interval
Feature request
Is your feature request related to a problem?
Having the option to define experiment success in terms of relative improvement & a confidence interval, rather than MDE & statistical significance
The majority of our PostHog experimentation supports iterations of the ML model that powers our home feed recommendations. When talking to the TPM of that team, he made it clear that when he runs these experiments, what he really cares about is knowing which version of the model wins and what the confidence intervals are.
He doesn't really care whether it's a 95% or a 90% chance of beating the original. What he ultimately wants is to resolve the experiment to the better-performing model and get an understanding of the confidence interval of that performance.
This is probably the fuzziest feedback, and I don't have good examples of what this could look like (or of other products that already provide it), but I would be interested in exploring and brainstorming!
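To make the idea a bit more concrete, here is a minimal sketch (purely illustrative, not tied to how PostHog's experiment engine actually works) of the kind of readout I have in mind: report the test variant's relative lift over control together with a confidence interval, using a delta-method approximation on the log of the rate ratio. The function name and the example numbers are made up.

```python
# Hypothetical sketch: summarize an experiment as "relative lift + confidence interval"
# instead of a pass/fail significance verdict against a pre-specified MDE.
from math import exp, log, sqrt
from scipy.stats import norm

def relative_lift_ci(successes_c, total_c, successes_t, total_t, confidence=0.95):
    """Return (point_estimate, lower, upper) for the relative lift of test over control."""
    p_c = successes_c / total_c
    p_t = successes_t / total_t
    # Delta-method standard error of log(p_t / p_c).
    se = sqrt((1 - p_t) / (total_t * p_t) + (1 - p_c) / (total_c * p_c))
    z = norm.ppf(0.5 + confidence / 2)
    log_ratio = log(p_t / p_c)
    lift = p_t / p_c - 1
    lower = exp(log_ratio - z * se) - 1
    upper = exp(log_ratio + z * se) - 1
    return lift, lower, upper

# Example: control 110/1000 conversions, test 130/1000 conversions.
lift, lo, hi = relative_lift_ci(110, 1000, 130, 1000)
print(f"Relative lift: {lift:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
```

The point is the shape of the output: a statement like "test is +18% over control (95% CI roughly -7% to +50%)" that the team can act on, rather than a binary significant/not-significant result.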
Describe the solution you'd like
Describe alternatives you've considered
Additional context
Debug info
- [ ] PostHog Cloud, Debug information: [please copy/paste from https://us.posthog.com/settings/project-details#variables]
- [ ] PostHog Hobby self-hosted with `docker compose`, version/commit: [please provide]
- [ ] PostHog self-hosted with Kubernetes (deprecated, see [`Sunsetting Kubernetes support`](https://posthog.com/blog/sunsetting-helm-support-posthog)), version/commit: [please provide]