
Designing an AB Experiment System for User Growth Scenarios

This article presents a comprehensive AB testing framework tailored for new‑user growth scenarios, detailing the challenges of early traffic allocation, the scientific validation of a new experiment system, real‑world case studies, and practical guidelines for evaluation and implementation.

DataFunSummit

01 New‑User Scenario Experiment Challenges

The UG (User Growth) funnel moves users from acquisition (Paid Ads, ASO, SEO) through activation, retention, and eventual churn. Traditional platform‑based split‑testing often fails because many new users (≈18.6%) do not generate a stable device ID at launch, leading to selection bias and wasted traffic.

Client‑side local split‑allocation can assign users instantly at first launch, but data shows >21% of users are assigned to different groups across reinstallations, breaking randomization.
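Client-side local allocation is typically a deterministic salted hash of the split-allocation ID. A minimal Python sketch (the MD5-mod-100 scheme and the IDs are illustrative assumptions, not from the talk) shows why a device ID regenerated on reinstall lands the same user in a different group:

```python
import hashlib

def assign_group(split_id: str, experiment_salt: str, n_buckets: int = 100) -> str:
    """Deterministically map a split-allocation ID to an experiment group.

    Hypothetical client-side bucketing: hash the salted ID into one of
    n_buckets, then split buckets 50/50 between treatment and control.
    """
    digest = hashlib.md5(f"{experiment_salt}:{split_id}".encode()).hexdigest()
    bucket = int(digest, 16) % n_buckets
    return "treatment" if bucket < n_buckets // 2 else "control"

# The reinstall problem: a fresh install often generates a new device ID,
# so the same user can be re-bucketed into a different group.
first_install_id = "device-aaa111"
reinstall_id = "device-bbb222"  # regenerated after reinstall
print(assign_group(first_install_id, "nuj_exp_v1"))
print(assign_group(reinstall_id, "nuj_exp_v1"))  # may differ -> broken randomization
```

Because assignment depends only on the ID and salt, it is instant and needs no server round-trip, but it is only as stable as the ID itself.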

Furthermore, evaluation timing lags behind the intervention, causing survivor bias in metric calculation.

02 New Experiment System and Scientific Validation

To address the above issues, a new experiment system is proposed that selects a split‑allocation ID based on three principles: compliance, immediacy (available at first launch), and uniqueness (stable within a single install cycle and one‑to‑one with metric IDs). The chosen ID achieves >99.7% one‑to‑one matching with metric IDs.
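The one-to-one matching rate can be measured from logged (split ID, metric ID) pairs at first launch. A short sketch (the data shape and helper name are assumptions for illustration):

```python
def one_to_one_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of pairs whose split ID and metric ID map 1:1 to each other."""
    split_to_metric: dict[str, set[str]] = {}
    metric_to_split: dict[str, set[str]] = {}
    for s, m in pairs:
        split_to_metric.setdefault(s, set()).add(m)
        metric_to_split.setdefault(m, set()).add(s)
    good = sum(
        1
        for s, m in pairs
        if len(split_to_metric[s]) == 1 and len(metric_to_split[m]) == 1
    )
    return good / len(pairs)

# s3 maps to two metric IDs, so only the first two pairs count as 1:1.
pairs = [("s1", "m1"), ("s2", "m2"), ("s3", "m3"), ("s3", "m4")]
print(one_to_one_rate(pairs))  # 0.5
```

A candidate split-allocation ID passing the >99.7% threshold on such a check satisfies the uniqueness principle above.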

Split‑allocation capability is verified through two methods: platform‑based randomization (ensuring uniform bucket distribution) and client‑side randomization (ensuring orthogonal multi‑layer experiments). Statistical tests (chi‑square test, t‑test, p‑value uniformity) confirm both uniformity and orthogonality.
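The two validation checks can be sketched in a few lines of Python with scipy (the simulated assignments here are placeholders, not the system's real traffic): a chi-square goodness-of-fit test for bucket uniformity, and a chi-square test of independence for cross-layer orthogonality.

```python
import random
from collections import Counter
from scipy import stats

random.seed(42)
n_users, n_buckets = 100_000, 100

# 1) Uniformity: bucket counts should match equal expected counts.
buckets = [random.randrange(n_buckets) for _ in range(n_users)]
counts = Counter(buckets)
observed = [counts.get(b, 0) for b in range(n_buckets)]
chi2, p_uniform = stats.chisquare(observed)  # uniform expectation by default
print(f"uniformity p-value: {p_uniform:.3f}")  # large p => no evidence of skew

# 2) Orthogonality: group assignment in layer A should be independent of layer B.
layer_a = [random.randrange(2) for _ in range(n_users)]
layer_b = [random.randrange(2) for _ in range(n_users)]
table = [[0, 0], [0, 0]]
for a, b in zip(layer_a, layer_b):
    table[a][b] += 1
chi2_ind, p_orth, _, _ = stats.chi2_contingency(table)
print(f"orthogonality p-value: {p_orth:.3f}")  # large p => layers look independent
```

In practice, p-value uniformity is checked by running many A/A experiments and verifying the resulting p-values are themselves uniformly distributed on [0, 1].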

03 Application Case Study

A real UG scenario is examined where the experiment aims to improve retention rate during the new‑user onboarding (NUJ) phase. Two split‑allocation strategies are compared: metric‑ID based (uniform but low lift) and local‑ID based (higher new‑device count and retention). The local‑ID experiment shows a 1% increase in effective new devices and a modest retention boost, though some core metrics show negative impact.

Further analysis reveals that >20% of users are re‑assigned to different groups after reinstall, breaking randomization and limiting decision‑making confidence.

04 Summary

The existing UG experiment framework cannot fully address early‑stage traffic allocation and evaluation bias; a new system is required.

Split‑allocation ID must satisfy compliance, immediacy, stability, and one‑to‑one mapping with metric IDs.

Evaluation in new‑user scenarios becomes multi‑dimensional, focusing on both effective new‑device count and retention rather than a single metric.

The proposed system transforms one‑dimensional optimization into a two‑dimensional one (DNU × LT), delivering higher device conversion and retention while tolerating minor LT trade‑offs.
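The DNU × LT trade-off reduces to simple arithmetic: total value scales with both effective new devices and per-device lifetime value, so a DNU gain can outweigh a small LT loss. A sketch with made-up numbers (none of these figures come from the article):

```python
def total_value(dnu: float, lt: float) -> float:
    """Two-dimensional objective: effective new devices times lifetime value."""
    return dnu * lt

baseline = total_value(dnu=100_000, lt=10.0)
# A variant gaining 1% DNU while giving up 0.5% LT still nets positive:
variant = total_value(dnu=100_000 * 1.01, lt=10.0 * 0.995)
lift = variant / baseline - 1
print(f"net lift: {lift:+.3%}")  # about +0.5%
```

This is why the system tolerates minor LT trade-offs: the decision criterion is the product, not either metric alone.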

Overall, the new AB experiment system enables earlier, more reliable traffic split, improves statistical soundness, and yields measurable growth benefits for user acquisition pipelines.

Tags: Mobile · AB testing · user growth · data analysis · experiment design · retention
Written by

DataFunSummit

Official account of the DataFun community, dedicated to sharing big data and AI industry summit news and speaker talks, with regular downloadable resource packs.
