
Understanding A/B Testing and Gradual Release with Didi’s Apollo Platform

Didi’s Apollo platform combines A/B testing with gradual (gray) release, letting product teams safely roll out new features to targeted user segments, monitor key metrics, and apply best‑practice guidelines—such as isolating variables, pre‑defining metrics, controlling duration, random grouping, and confidence analysis—to achieve statistically significant, data‑driven improvements across thousands of weekly releases.

Didi Tech

Didi’s Effectiveness Platform Department regularly shares case studies, research, tools, and processes for improving organizational efficiency. One of the key techniques highlighted is A/B testing combined with gradual release, which helps large‑scale internet products evaluate new features safely and improve user experience.

What is A/B testing? Before a new version is launched, two or more variants of a target feature are created and users are randomly split into groups that experience each variant. By comparing metrics such as conversion rate, sales, or bounce rate, the most effective version can be identified.
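The mechanics described above can be sketched in a few lines. This is a minimal illustration, not Apollo's actual assignment logic; the function names and the uniform two-way split are assumptions for the example.

```python
import random

def assign_variant(user_ids, variants=("A", "B"), seed=42):
    """Randomly assign each user to one variant (uniform split).

    A fixed seed keeps assignments reproducible for the example;
    a production system would persist assignments instead.
    """
    rng = random.Random(seed)
    return {uid: rng.choice(variants) for uid in user_ids}

def conversion_rate(conversions):
    """Share of True values in a list of per-user conversion flags."""
    return sum(conversions) / len(conversions) if conversions else 0.0
```

With assignments in hand, each group's conversion rate (or sales, bounce rate, etc.) can be computed separately and compared, which is the comparison step the paragraph above refers to.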

What is gradual release? Gradual (or gray) release selects a specific target audience (e.g., by gender, city, age) and rolls out a feature to a small percentage of traffic. The feature’s performance is monitored and, if successful, the rollout is expanded until full deployment.
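A common way to implement this kind of percentage rollout is stable hash bucketing: hash the user and feature into a bucket from 0 to 99, and release to users whose bucket falls below the current rollout percentage. A user who is in at 10% stays in at 50%, so expanding the rollout never flips anyone back. The sketch below is a generic illustration with hypothetical names, not Apollo's implementation.

```python
import hashlib

def in_rollout(user_id, feature, percent, city="", target_cities=None):
    """Return True if this user should see the feature.

    Targeting filters (here, an optional city whitelist) are applied
    first; then a stable hash bucket decides inclusion, so membership
    is consistent across requests and monotone as `percent` grows.
    """
    if target_cities and city not in target_cities:
        return False
    digest = hashlib.md5(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket in 0..99
    return bucket < percent
```

Because the bucket depends only on the user and feature IDs, monitoring tools can attribute metrics to the rollout group deterministically while the percentage is ramped from 1% toward 100%.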

Apollo Platform is Didi’s online A/B testing and gradual‑release system. It supports a wide range of scenarios—from mobile apps and backend APIs to algorithmic strategies—by providing:

Visual operation interface and experiment design tools

Scientific traffic segmentation and one‑to‑one experiment configuration

Complex rule‑based targeting, service degradation, configuration sync, and dynamic model upgrades

The platform enables product managers, growth engineers, and data analysts to calculate experiment metrics, perform multidimensional data analysis, and obtain optimal solutions for various use cases.

Key best‑practice guidelines derived from Didi’s experience:

1. Confirm the variable to test: keep all other aspects unchanged so that the observed effect can be attributed solely to the variable.

2. Define critical metrics before the experiment: choose a small set of meaningful indicators (e.g., click‑through rate, registration completion rate) rather than tracking many irrelevant metrics.

3. Control experiment duration: determine the required sample size and run the test long enough to achieve statistical significance, but avoid unnecessary prolongation.

4. Ensure controlled grouping: randomly assign users to groups and avoid temporal or demographic biases that could confound results.

5. Analyze results with confidence levels: evaluate both the magnitude of metric changes and their statistical confidence to judge whether findings will generalize to the full user base.
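Guideline 5 is usually operationalized with a significance test. For two conversion rates, a standard choice is the two-proportion z-test; the version below uses only the standard library and is a sketch of the general technique, not of Apollo's analysis pipeline.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b are conversion counts, n_a/n_b are group sizes.
    Returns (z statistic, two-sided p-value) using the pooled
    proportion for the standard error.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 100 conversions out of 1,000 in the control versus 130 out of 1,000 in the variant yields a p-value below 0.05, so the uplift would typically be judged significant at the 95% confidence level; reading off only the raw 3-point difference, without this check, is exactly the mistake the guideline warns against.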

Additional considerations include sample characteristics, expected uplift magnitude, and the trade‑off between experiment cost and benefit.
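The trade-off between expected uplift and experiment cost can be made concrete with a sample-size estimate: the smaller the uplift you want to detect, the more users (and thus time and risk) the experiment costs. The textbook approximation below fixes significance at 5% (two-sided) and power at 80%; the function name and interface are assumptions for the example.

```python
import math

def sample_size_per_group(p_base, uplift):
    """Approximate per-group sample size for a two-proportion test.

    p_base: baseline conversion rate (e.g. 0.10 for 10%).
    uplift: relative lift to detect (e.g. 0.10 for +10%).
    Uses fixed z-values for alpha=0.05 (two-sided) and power=0.80.
    """
    p_alt = p_base * (1 + uplift)
    z_alpha = 1.96  # two-sided 5% significance
    z_beta = 0.84   # 80% power
    var = p_base * (1 - p_base) + p_alt * (1 - p_alt)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p_base - p_alt) ** 2)
```

Detecting a 10% relative lift on a 10% baseline requires on the order of 15,000 users per group, while a 20% lift needs only about a quarter of that, which illustrates why the expected uplift magnitude drives experiment cost.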

Apollo now powers the majority of Didi’s business lines, handling thousands of weekly releases and nearly a thousand experiment cases, making it a cornerstone of the company’s data‑driven transformation.

Looking ahead, the highlighted challenges are improving experiment efficiency, enhancing Apollo’s usability, reducing experiment costs, and tailoring experiment designs to Didi’s specific business characteristics.
