Causal Inference for Incentive and Supply‑Demand Optimization in Tencent Weishi
This article presents a comprehensive overview of applying causal inference techniques to Tencent Weishi's cash incentive and video supply‑demand optimization, detailing business modeling, algorithmic frameworks, treatment representations, constrained multivariate causal models, experimental evaluations, and practical deployment insights.
Introduction – The talk shares the application of causal inference methods in Tencent Weishi’s cash‑incentive and supply‑demand scenarios, outlining three main topics: causal inference with incentive algorithms, causal inference with supply‑demand regulation, and a constrained continuous multivariate causal model.
Speaker – Zheng Jia, Level‑11 researcher at Tencent; edited by Wang Yanhong (University of Sheffield); produced by DataFun.
01. Causal Inference & Incentive Algorithms
Business background: Weishi distributes cash red‑packets under a fixed budget to maximize next‑day retention and daily usage time. The incentive policy is parameterized by three variables: the amount, timing, and quantity of red‑packets.
Policy representation options: (1) one‑hot encoding of a red‑packet sequence, (2) a three‑variable vector, (3) a time‑varying function. Each representation trades off detail versus dimensionality.
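As a toy illustration of the trade‑off (the shapes, bucket values, and the use of mean timing in option 2 are my own simplifications, not from the talk), the three candidate representations of one user's daily policy might look like:

```python
import numpy as np

# Hypothetical example: a user receives red-packets of 0.3 and 0.5 yuan
# at hours 9 and 20 (24 hourly slots, amounts bucketed into 5 levels).
AMOUNT_BUCKETS = [0.1, 0.3, 0.5, 1.0, 2.0]

# (1) One-hot style encoding of the red-packet sequence:
#     a 24 x 5 grid, one row per hour, one column per amount bucket.
one_hot = np.zeros((24, 5))
one_hot[9, AMOUNT_BUCKETS.index(0.3)] = 1
one_hot[20, AMOUNT_BUCKETS.index(0.5)] = 1

# (2) A compact three-variable vector: (total amount, mean timing, quantity).
three_var = np.array([0.3 + 0.5, (9 + 20) / 2, 2])

# (3) A time-varying function: cumulative amount paid out by each hour.
cumulative = np.cumsum(one_hot @ np.array(AMOUNT_BUCKETS))

print(one_hot.shape, three_var, round(cumulative[-1], 2))
```

Option (1) is the most detailed but lives in a very high‑dimensional space; option (2) is cheap to model but discards the sequence; option (3) sits in between.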
Algorithmic frameworks considered: (1) causal inference + multi‑objective constrained optimization (chosen for stability), (2) offline reinforcement learning + constrained optimization (promising but data‑intensive), (3) online reinforcement learning for traffic and budget control. The final choice was framework 1.
Pipeline: offline user‑feature computation → causal model predicts uplift for each policy → multi‑objective optimizer allocates the optimal policy, with pre‑clustering to reduce computation.
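A minimal sketch of this pipeline, with hypothetical uplift/cost numbers standing in for the causal model's output and a greedy ratio‑based allocator standing in for the production multi‑objective optimizer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_clusters, n_policies = 5, 3          # users are pre-clustered to cut computation

# Stand-in for steps 1-2 (offline features + causal model): predicted
# retention uplift and per-user cost of each policy per cluster (hypothetical).
uplift = rng.uniform(0.0, 0.05, size=(n_clusters, n_policies))
cost = np.array([0.2, 0.5, 1.0])       # yuan per user for each policy
cluster_size = np.array([100, 80, 120, 60, 40])

# Step 3: allocate under a total budget by descending uplift-per-yuan
# (a greedy stand-in for the multi-objective constrained optimizer).
budget = 150.0
roi = uplift / cost                     # shape (n_clusters, n_policies)
assignment = np.full(n_clusters, -1)    # -1 = no red-packet for this cluster
for flat in np.argsort(roi, axis=None)[::-1]:
    c, p = divmod(int(flat), n_policies)
    spend = cost[p] * cluster_size[c]
    if assignment[c] == -1 and spend <= budget:
        assignment[c] = p
        budget -= spend

print(assignment)
```

The real system replaces the greedy loop with a proper constrained solver, but the shape of the problem (uplift matrix in, per‑segment policy out, budget respected) is the same.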
02. Causal Inference & Supply‑Demand Regulation
Business background: Weishi needs to allocate exposure ratios across video categories to improve user experience (measured by the 3‑second swipe‑through rate) and total watch time, while respecting overall exposure constraints.
Modeling ideas: (1) binary treatment (increase/decrease) with multi‑objective constrained optimization, (2) continuous treatment (exposure proportion) with causal effect curve estimation, (3) embedding constraints directly into causal effect estimation.
Key techniques: clustering users (e.g., with K‑means) before estimating treatment effects; meta‑learners (T‑Learner, X‑Learner) and DML for binary or continuous treatments; and DR‑Net and VC‑Net for deep causal modeling of continuous treatments.
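To make the meta‑learner idea concrete, here is a minimal T‑Learner on synthetic data (the synthetic setup and the choice of gradient boosting as base learner are mine; the talk does not prescribe a specific base model):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
n = 4000
x = rng.normal(size=(n, 2))                 # user features
t = rng.integers(0, 2, size=n)              # binary treatment: boost a category or not
true_effect = 1.0 + x[:, 0]                 # heterogeneous treatment effect
y = x[:, 1] + t * true_effect + rng.normal(scale=0.1, size=n)

# T-Learner: fit separate outcome models on treated and control samples,
# then estimate the uplift as the difference of their predictions.
mu0 = GradientBoostingRegressor().fit(x[t == 0], y[t == 0])
mu1 = GradientBoostingRegressor().fit(x[t == 1], y[t == 1])
uplift_hat = mu1.predict(x) - mu0.predict(x)

print(round(np.mean(np.abs(uplift_hat - true_effect)), 3))  # CATE error
```

The X‑Learner adds a cross‑fitting step on top of the same two outcome models, and DML instead residualizes both outcome and treatment before regressing one residual on the other.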
03. Constrained Continuous Multivariate Causal Model (MDPP‑Forest)
Problem: Allocate a high‑dimensional exposure‑ratio vector (e.g., 20‑dimensional) to maximize user watch time under the constraint that the vector sums to 1.
Method: Extend causal forests to handle high‑dimensional continuous treatments by modifying the split criterion to incorporate a “Maximum Difference Point of Preference” (MDPP) for each dimension, normalizing MDPPs to satisfy the sum‑to‑1 constraint, and using multiple trees (forest) for robustness.
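A simplified single‑node sketch of the MDPP idea (my paraphrase of the split criterion, not the paper's exact definition): for each treatment dimension, scan for the treatment value at which the mean outcomes to its left and right differ most, then normalize the per‑dimension MDPPs so the recommended treatment vector sums to 1:

```python
import numpy as np

def mdpp_1d(treatment, outcome, min_leaf=20):
    """Maximum Difference Point of Preference for one treatment dimension:
    the treatment value whose left/right mean outcomes differ most
    (naive O(n^2) scan; min_leaf guards against tiny, noisy sides)."""
    order = np.argsort(treatment)
    t, y = treatment[order], outcome[order]
    best_t, best_gap = t[min_leaf], -np.inf
    for i in range(min_leaf, len(t) - min_leaf):
        gap = abs(y[:i].mean() - y[i:].mean())
        if gap > best_gap:
            best_gap, best_t = gap, t[i]
    return best_t

rng = np.random.default_rng(2)
n, d = 500, 3
treat = rng.uniform(0, 1, size=(n, d))       # exposure ratio per category
# Hypothetical response: watch time jumps once dimension j's exposure
# passes a per-dimension threshold 0.2 * (j + 1).
y = sum((treat[:, j] > 0.2 * (j + 1)).astype(float) for j in range(d))

mdpp = np.array([mdpp_1d(treat[:, j], y) for j in range(d)])
mdpp_normalized = mdpp / mdpp.sum()          # enforce the sum-to-1 constraint
print(mdpp.round(2), round(mdpp_normalized.sum(), 6))
```

In the full method this computation happens at every candidate tree split, and averaging over many trees smooths the noisy per‑node estimates.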
Algorithmic acceleration: use weighted quantile sketches to compute left/right means only at quantile points, dramatically reducing complexity.
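The acceleration can be illustrated by evaluating candidate splits only at quantiles of the treatment values rather than at every sample (plain `numpy.quantile` here as an unweighted stand‑in for the weighted quantile sketch), with left/right means read off a single prefix sum:

```python
import numpy as np

rng = np.random.default_rng(3)
t = rng.uniform(0, 1, 100_000)                 # one treatment dimension
y = (t > 0.35).astype(float) + rng.normal(scale=0.1, size=t.size)

# Instead of len(t) - 1 candidate splits, evaluate only ~32 quantile points.
candidates = np.quantile(t, np.linspace(0.05, 0.95, 32))

# With t sorted once, the left/right means at each candidate come from a
# prefix sum in O(1), so the scan is O(n log n + k) instead of O(n * k).
order = np.argsort(t)
t_sorted, y_sorted = t[order], y[order]
prefix = np.concatenate([[0.0], np.cumsum(y_sorted)])
idx = np.searchsorted(t_sorted, candidates)
left_mean = prefix[idx] / np.maximum(idx, 1)
right_mean = (prefix[-1] - prefix[idx]) / np.maximum(len(t) - idx, 1)
best = candidates[np.argmax(np.abs(left_mean - right_mean))]
print(round(float(best), 2))                   # near the true breakpoint 0.35
```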
Experiments: simulated data (6 user features + 2 behavior features) and semi‑synthetic Tencent Weishi data (20‑dim user features, 10‑dim treatment vector). MDPP‑Forest consistently achieved lower main regret and treatment squared error than DML, DR‑Net, VC‑Net, and other baselines. Performance improved with forest size up to ~250 trees before over‑fitting.
04. Q&A
Q: Why is it valid to simply normalize when the allocated exposure ratios sum to more than 1? A: The optimization treats exposure ratios as relative preferences; scaling them proportionally preserves their ordering while satisfying the hard sum‑to‑1 constraint.
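The answer can be checked directly: dividing a positive vector by its sum keeps the relative ordering of every pair of categories while producing a valid allocation (the example numbers are illustrative):

```python
import numpy as np

raw = np.array([0.5, 0.3, 0.4, 0.2])       # model outputs summing to 1.4 > 1
normalized = raw / raw.sum()               # proportional rescaling

print(normalized.round(3), round(float(normalized.sum()), 6))
```

Because the scaling factor is a single positive constant, `argsort` of the vector is unchanged, which is exactly the "relative values" argument in the answer.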
Conclusion – The session highlighted practical challenges and solutions for deploying causal inference in large‑scale incentive and supply‑demand systems, emphasizing stable DML approaches, careful treatment design, and the novel MDPP‑Forest for constrained multivariate problems.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.