
Practice of Causal Inference Based on Representation Learning: RCT Standards, Joint Tree‑Neural Modeling, RCT‑ODB Fusion, and Feature Decomposition

This article presents a comprehensive industrial‑level guide to causal inference using representation learning, covering proper RCT experiment design, joint modeling of tree and neural networks, fusion of RCT with observational data, and advanced feature‑decomposition techniques to mitigate bias.

DataFunTalk

Overview

The session presents practical causal-inference techniques based on representation learning, organized into four parts: industrial RCT experiment standards, joint modeling of tree and deep models, fusion of RCT and observational data (ODB), and feature decomposition.

01 Industrial RCT Experiment Standards

Discusses why RCT data are crucial, the three key properties (comparability & covariate balance, exchangeability, no backdoor paths), and common pitfalls such as high cost and ethical concerns. Provides guidelines for target‑population definition, flow shuffling, and choosing between user‑level and request‑level RCT designs, emphasizing nested designs and online RCT for continuous, low‑cost experimentation.
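The distinction between user-level and request-level designs comes down to which unit gets randomized. A minimal sketch of hash-based bucketing (the usual mechanism behind flow shuffling), with hypothetical experiment salts and ids:

```python
import hashlib

def bucket(unit_id: str, salt: str, n_buckets: int = 100) -> int:
    """Deterministically hash a unit into one of n_buckets."""
    digest = hashlib.md5(f"{salt}:{unit_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

def assign(unit_id: str, salt: str, treatment_share: float = 0.5) -> str:
    """Assign treatment/control by hash bucket; the salt keeps
    concurrent experiments' buckets independent of each other."""
    return "treatment" if bucket(unit_id, salt) < treatment_share * 100 else "control"

# User-level RCT: hash the user id, so each user sticks to one arm.
user_arm = assign("user_42", salt="exp_user_level")
# Request-level RCT: hash the request id, so arms vary per request
# even for the same user.
req_arm = assign("req_20240601_0001", salt="exp_request_level")
```

Because assignment is a pure function of (salt, unit id), it is reproducible offline, which is what makes continuous, low-cost online RCTs practical.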

02 Joint Modeling of Tree Models & Neural Networks

Illustrates causal graphs in industry, showing that unobserved confounders are often absent. Compares tree‑based causal forests (which split only on confounders) with neural networks that may introduce bias when all features are used. Proposes two integration ideas: using tree‑generated confounder embeddings as NN inputs, and employing adversarial learning for feature decomposition.
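The first integration idea can be sketched with scikit-learn: fit boosted trees on confounder features only, then use the per-tree leaf indices as a learned confounder embedding for a downstream neural model. The data here is synthetic and the feature split is an assumption for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X_conf = rng.normal(size=(500, 4))  # confounder features only (assumed identified)
y = X_conf @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=500)

# 1) Fit trees on confounders only, so every split is a confounder split.
gbm = GradientBoostingRegressor(n_estimators=20, max_depth=3, random_state=0)
gbm.fit(X_conf, y)

# 2) Leaf indices per tree form a discrete, tree-derived embedding.
leaves = gbm.apply(X_conf)                 # shape (n_samples, n_estimators)
leaves = leaves.reshape(leaves.shape[0], -1)
embedding = OneHotEncoder().fit_transform(leaves).toarray()
# `embedding` can be concatenated with other inputs of a neural uplift model,
# restricting what the network can learn about confounding to the tree splits.
```

The one-hot leaf encoding keeps the neural network from re-deriving splits on non-confounder features, which is the bias risk the tree/NN comparison highlights.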

03 RCT & ODB Fusion Modeling

Introduces propensity-score (PS) matching and stratification, describing how to align observational data with RCT data stratum by stratum. Details three steps: stratify by PS, correct the covariate shift so stratum sizes match, and construct unbiased treatment/control samples. Highlights the importance of removing instrumental variables from PS models.
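The three steps can be sketched end to end. This is a minimal illustration on synthetic data, assuming the instrumental variables have already been dropped from the PS model's inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Observational data: treatment depends on confounders (synthetic).
X_obs = rng.normal(size=(2000, 3))  # confounders only -- no instruments
t_obs = (rng.random(2000) < 1 / (1 + np.exp(-X_obs[:, 0]))).astype(int)

# RCT data: same covariates, but treatment was randomized.
X_rct = rng.normal(size=(500, 3))

# Step 1: fit a propensity model on the observational data.
ps_model = LogisticRegression().fit(X_obs, t_obs)
ps_obs = ps_model.predict_proba(X_obs)[:, 1]
ps_rct = ps_model.predict_proba(X_rct)[:, 1]

# Step 2: stratify both datasets by propensity score (5 quantile strata).
edges = np.quantile(ps_obs, np.linspace(0, 1, 6))
obs_stratum = np.clip(np.searchsorted(edges, ps_obs, side="right") - 1, 0, 4)
rct_stratum = np.clip(np.searchsorted(edges, ps_rct, side="right") - 1, 0, 4)

# Step 3: reweight observational strata so their shares match the RCT's,
# yielding a pseudo-randomized sample for unbiased treatment/control pairs.
rct_share = np.bincount(rct_stratum, minlength=5) / len(rct_stratum)
obs_share = np.bincount(obs_stratum, minlength=5) / len(obs_stratum)
weights = rct_share[obs_stratum] / obs_share[obs_stratum]
```

After reweighting, stratum-level treatment-effect estimates on the observational side can be checked against the RCT's, which is the layer-by-layer alignment the talk describes.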

04 Feature Decomposition

Explains how to decompose covariates X into instrumental variables I, confounders C, and adjustment variables A using loss functions that enforce both independence (I ⊥ Y | T, A ⊥ T, and C ⊥ T under learned sample reweighting) and predictive accuracy. Describes orthogonal regularization, multi-treatment extensions, IPW-based balancing, and an adversarial loss ensuring A cannot predict the treatment.
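The shape of such an objective can be sketched in NumPy. This is only an illustrative stand-in: it uses a squared-correlation penalty as a crude proxy for the independence constraints (a real implementation would use a discrepancy measure or an adversarial discriminator), and all function names are hypothetical:

```python
import numpy as np

def corr_penalty(z: np.ndarray, t: np.ndarray) -> float:
    """Squared correlation between each representation dim and t --
    a simple stand-in for an independence/balancing penalty."""
    zc = z - z.mean(axis=0)
    tc = t - t.mean()
    cov = zc.T @ tc / len(t)
    scale = zc.std(axis=0) * tc.std() + 1e-8
    return float(np.sum((cov / scale) ** 2))

def decomposition_loss(I, C, A, t, y, y_hat, t_hat):
    """Sketch of a feature-decomposition objective:
    - y_hat (from C and A) should predict the outcome y,
    - t_hat (from I and C) should predict the treatment t,
    - A is penalized for any association with t (the A ⊥ T constraint);
    the I ⊥ Y | T and reweighted C ⊥ T terms would be analogous."""
    pred_y = float(np.mean((y - y_hat) ** 2))   # outcome accuracy
    pred_t = float(np.mean((t - t_hat) ** 2))   # treatment accuracy
    balance_A = corr_penalty(A, t)              # adjustment balancing
    return pred_y + pred_t + balance_A
```

In the adversarial variant, `balance_A` is replaced by the negated loss of a discriminator trying to recover t from A, trained against the representation.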

05 Q&A

Answers practical questions about offline debias evaluation, AA testing for user‑level RCT, fairness concerns in long‑term online RCT, feature construction for online experiments, and the relationship between learned representations and propensity scores.

Overall, the material provides a detailed roadmap for applying causal inference methods in large‑scale growth experiments, combining rigorous experimental design with modern representation‑learning models.

Tags: Machine Learning · causal inference · propensity score · representation learning · Online Experiment · Feature Decomposition · RCT
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
