Multi-armed bandit or A/B testing?
I was curious whether this is a feature that's supported, or planned to be supported in the future.
@robertleeplummerjr - yes, we built it as a proof of concept on a private repo of Mojito JS Delivery. We aren't really using MAB tests for our clients because we find it's easier for us to report on straight traffic splits and check for SRM. But if it's of interest to the community, we'd be happy to build this out a bit further and share how it works.
At a high level, our proof of concept works by sharing the test results with mojito-js-delivery's CI pipeline. E.g. (a rough sketch of the CI step follows the list):
- Publish the test @ 50-50 split
- Collect data & publish the results / MAB split via JSON (e.g. `{ "a": 0.23782, "b": 0.76218 }`)
- The CI pipeline runs regularly and sets the splits according to the results / MAB split
- The CI pipeline publishes the new traffic splits
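
For concreteness, that CI step could look roughly like the sketch below. The file names, the `recipes`/`sampleRate` config shape, and the `applyMabSplit` helper are illustrative assumptions, not mojito-js-delivery's actual API:

```ts
// Hypothetical CI step: read the published MAB split and turn it into
// per-recipe sample rates. File names and config shape are assumptions.
import { readFileSync, writeFileSync } from "fs";

type MabSplit = Record<string, number>; // e.g. { "a": 0.23782, "b": 0.76218 }

function applyMabSplit(splitPath: string, configPath: string): void {
  const split: MabSplit = JSON.parse(readFileSync(splitPath, "utf8"));

  // Normalise so the weights always sum to 1, even if the upstream job rounded.
  const total = Object.values(split).reduce((sum, p) => sum + p, 0);

  const config = JSON.parse(readFileSync(configPath, "utf8"));
  for (const recipe of config.recipes) {
    const weight = split[recipe.id];
    if (weight === undefined) {
      throw new Error(`MAB split is missing a weight for recipe "${recipe.id}"`);
    }
    recipe.sampleRate = weight / total;
  }

  writeFileSync(configPath, JSON.stringify(config, null, 2));
}

// Run on a schedule from the pipeline, then publish the updated config.
applyMabSplit("mab-split.json", "test-config.json");
```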
The complexity lies in:
- Providing a standard API between XYZ storage target (e.g. whether we're using Snowplow/GA/something else) and the CI pipeline (see the interface sketch after this list)
- Cookie and persistence implications when using hash-based assignment and ITP protections. But for MAB, maybe it's not as bad that some users could get re-assigned to the better-performing treatment.
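
One way that "standard API" could be pinned down is a small interface that every storage target implements, so the CI step doesn't care where the results come from. The names below (`MabResultsSource`, `SnowplowResultsSource`) are purely illustrative, not existing code:

```ts
// Illustrative "standard API" between a results store and the CI pipeline.
interface MabResultsSource {
  // Returns arm weights for one test, e.g. { "a": 0.23782, "b": 0.76218 }.
  fetchSplit(testId: string): Promise<Record<string, number>>;
}

class SnowplowResultsSource implements MabResultsSource {
  constructor(private readonly reportUrl: string) {}

  async fetchSplit(testId: string): Promise<Record<string, number>> {
    // In practice this would query the warehouse or a reporting endpoint;
    // here we assume an endpoint that already returns the split JSON.
    const response = await fetch(
      `${this.reportUrl}?test=${encodeURIComponent(testId)}`
    );
    return (await response.json()) as Record<string, number>;
  }
}

// A GA-backed source would just be another implementation of MabResultsSource;
// the CI step itself stays unchanged.
```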
It's not an out-of-the-box solution either. You'd need to have the chops to expose your test results to Mojito's CI pipeline. Though it only took us a few hours to script this up (with intimate knowledge).

I feel compelled to point out that the stationarity assumption, which is required by many MAB algorithms (such as the popular Thompson sampling bandit), hardly ever holds in online practice. Violating this assumption makes the bandit susceptible to Simpson's paradox (e.g. if conversion rates drift over time while the bandit is reallocating traffic, an arm that happened to receive more traffic during a high-converting period can look better than it really is), so in practice the bandit converges on suboptimal arms more often than the theory suggests.
Oh, and technically speaking, AB testing is already a solution to the MAB problem. :-D
It's called "epsilon-first": we first spend a fixed amount of time/sample (the epsilon) fully exploring (e.g. randomly sampling arms), and then commit to fully exploiting the most likely optimal arm for the remainder of the sample (i.e. we ship the winning variation). In cases where the sample is not limited (i.e. there is no fixed limit or budget on the sample), this solution can actually be quite efficient.
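
To make that concrete, here's a minimal sketch of an epsilon-first policy; the names, the stats shape, and the conversion-rate tie-breaking are illustrative assumptions, not a reference implementation:

```ts
// Minimal epsilon-first policy: a plain A/B split for the first
// `explorationBudget` assignments, then commit to the best-observed arm.
interface ArmStats {
  assignments: number;
  conversions: number;
}

function epsilonFirstAssign(
  arms: Record<string, ArmStats>,
  explorationBudget: number
): string {
  const ids = Object.keys(arms);
  const totalAssignments = ids.reduce((n, id) => n + arms[id].assignments, 0);

  if (totalAssignments < explorationBudget) {
    // Exploration phase: uniform random assignment, i.e. the 50-50 A/B test.
    return ids[Math.floor(Math.random() * ids.length)];
  }

  // Exploitation phase: ship the arm with the highest observed conversion rate.
  const rate = (s: ArmStats) =>
    s.assignments > 0 ? s.conversions / s.assignments : 0;
  return ids.reduce((best, id) => (rate(arms[id]) > rate(arms[best]) ? id : best));
}
```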