Practical Insights into Online Experiment Design and Analysis at Tencent Lookpoint
The talk surveys online experiment fundamentals and design variations, then works through real-world case studies from the Tencent Lookpoint platform, emphasizing hypothesis validation, causal analysis, and actionable best practices for product growth and decision-making.
Speaker Qian Cheng, senior data R&D engineer at Tencent, introduces the fundamentals of online experiments, defining experiments as controlled interventions to validate hypotheses and describing AB testing as a special case of completely randomized designs.
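The "A/B test as a completely randomized design" framing can be made concrete with a minimal stdlib-only sketch: each user is independently assigned to control or treatment, and the effect is read off as a difference in means with its z-statistic. The metric, base level, and lift below are illustrative assumptions, not Lookpoint data.

```python
import random
import statistics

def run_ab_test(n_users=10_000, base=1.0, lift=0.05, seed=42):
    """Completely randomized design: each user is independently
    assigned to control (A) or treatment (B) with probability 1/2."""
    rng = random.Random(seed)
    control, treatment = [], []
    for _ in range(n_users):
        # Hypothetical continuous metric (e.g. dwell time) with a
        # fixed additive lift in the treatment group.
        if rng.random() < 0.5:
            control.append(rng.gauss(base, 1.0))
        else:
            treatment.append(rng.gauss(base + lift, 1.0))
    diff = statistics.fmean(treatment) - statistics.fmean(control)
    # Standard error of the difference in means (unpooled).
    se = (statistics.variance(control) / len(control)
          + statistics.variance(treatment) / len(treatment)) ** 0.5
    return diff, diff / se  # effect estimate and z-statistic
```

Because assignment is the only systematic difference between the groups, the difference in means is an unbiased estimate of the average treatment effect.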
He discusses various experiment types beyond simple AB tests, such as synthetic control, time‑slice rotation, and multi‑armed bandits, and explains how experiment design must consider target population, objectives, and intervention methods.
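Of the designs mentioned, the multi-armed bandit is the one that trades exploration against exploitation during the experiment itself. A hedged epsilon-greedy sketch (arm rewards and parameters are invented for illustration):

```python
import random

def epsilon_greedy(true_rates, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: with probability eps pull a random arm
    (explore); otherwise pull the arm with the best observed mean
    reward (exploit). Traffic shifts toward better arms over time."""
    rng = random.Random(seed)
    counts = [0] * len(true_rates)
    values = [0.0] * len(true_rates)  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(true_rates))
        else:
            arm = max(range(len(true_rates)), key=lambda a: values[a])
        # Bernoulli reward drawn from the arm's (unknown) true rate.
        reward = 1.0 if rng.random() < true_rates[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts, values
```

Unlike a fixed-split A/B test, the allocation here is adaptive, which is why bandits suit short-lived content such as headlines or ad creatives.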
Case studies from the Tencent Lookpoint platform illustrate experiment implementation in information‑flow scenarios, including flash‑screen ad timing, holiday red‑packet activity, and hotspot recommendation upgrades, highlighting the importance of analyzing both intention‑to‑treat and complier‑average causal effect metrics.
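The ITT/CACE distinction matters because not every user assigned to a feature actually experiences it. Under random assignment, the standard Wald estimator recovers the complier-average effect by dividing the ITT effect by the difference in uptake; a minimal sketch with hypothetical (took_treatment, outcome) records:

```python
def itt_and_cace(assigned_t, assigned_c):
    """Estimate the intention-to-treat (ITT) effect and, via the
    Wald estimator, the complier-average causal effect (CACE).

    Each record is a (took_treatment, outcome) pair; random
    assignment serves as an instrument for actual uptake."""
    def mean(xs):
        return sum(xs) / len(xs)
    # ITT: compare outcomes by *assignment*, ignoring uptake.
    itt = (mean([y for _, y in assigned_t])
           - mean([y for _, y in assigned_c]))
    # First stage: effect of assignment on actual treatment uptake.
    uptake = (mean([d for d, _ in assigned_t])
              - mean([d for d, _ in assigned_c]))
    return itt, itt / uptake  # CACE = ITT / compliance difference
```

With 80% compliance, an ITT effect of 0.8 scales up to a CACE of 1.0, which is why reporting only ITT can understate the effect on users who were actually exposed.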
He emphasizes best practices: align analysis unit with experiment unit, verify sample homogeneity, use causal inference tools (instrumental variables, matching, uplift modeling), and construct effect maps to detect heterogeneous impacts.
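The sample-homogeneity check above is often operationalized as a sample-ratio-mismatch (SRM) test: a chi-square goodness-of-fit test of observed group sizes against the designed split, run before any metric is interpreted. A stdlib-only sketch (the 3.841 threshold is the 0.05 critical value for one degree of freedom):

```python
def srm_check(n_control, n_treatment, expected_ratio=0.5):
    """Sample-ratio-mismatch check: chi-square goodness-of-fit test
    of observed group sizes against the designed split. A statistic
    above 3.841 (alpha = 0.05, 1 d.o.f.) flags broken randomization."""
    total = n_control + n_treatment
    exp_c = total * expected_ratio
    exp_t = total * (1 - expected_ratio)
    chi2 = ((n_control - exp_c) ** 2 / exp_c
            + (n_treatment - exp_t) ** 2 / exp_t)
    return chi2, chi2 > 3.841  # statistic and "mismatch?" flag
```

For example, a 5000 vs 5250 split under a designed 50/50 allocation yields a statistic of about 6.1 and should be flagged, since any downstream metric comparison on such groups is suspect.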
The talk concludes with a standardized experiment workflow—hypothesis formulation, design, deployment, monitoring, and result interpretation—and a Q&A addressing challenges in B‑side experiments, sample size, spillover effects, and risk assessment.
DataFunSummit