How to Better Leverage Data in Causal Inference
This presentation introduces two recent works from Ant Group that improve causal inference: GBCT, which explicitly uses historical control data to reduce selection bias, and WMDL, which fuses heterogeneous multi-source data. It covers both methods' theoretical foundations, experimental results, and practical applications in finance.
Background – Traditional machine‑learning prediction assumes i.i.d. data, while causal inference seeks to understand the mechanism behind outcomes (e.g., whether smoking causes lung cancer). Two types of data are important: observational data and data from randomized controlled experiments.
Goal – The talk explains how to better use data for causal inference from two perspectives: (1) leveraging historical control data to explicitly mitigate confounding bias, and (2) causal inference under multi‑source data fusion.
1. GBCT (Debiased Causal Tree)
Traditional causal trees split nodes to maximize heterogeneity of treatment effects but do not guarantee homogeneous distributions after splitting. GBCT introduces a split criterion that combines the usual outcome‑fitting loss with a confounding entropy term, explicitly reducing selection bias using pre‑intervention data. The method estimates the Conditional Average Treatment Effect on the Treated (CATT) and uses a weighted loss that balances pre‑intervention and post‑intervention data.
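The split criterion described above can be sketched as follows. This is an illustrative reading, not the paper's exact estimator: the paper defines confounding entropy over pre-treatment outcome distributions, which is replaced here by a mean-difference proxy, and the names `confounding_entropy`, `node_score`, and `lam` are assumptions for exposition.

```python
import numpy as np

def confounding_entropy(y_pre, t):
    """Proxy for the confounding-entropy term: squared difference of
    pre-intervention mean outcomes between treated (t == 1) and
    control (t == 0) units in a candidate node."""
    return (y_pre[t == 1].mean() - y_pre[t == 0].mean()) ** 2

def node_score(y_post, y_pre, t, lam=1.0):
    """Score a candidate child node: the usual outcome-fitting loss
    (within-arm variance of the post-intervention outcome) plus a
    penalty, weighted by lam, for treated/control imbalance measured
    on pre-intervention data. Lower is better."""
    fit = y_post[t == 1].var() + y_post[t == 0].var()
    return fit + lam * confounding_entropy(y_pre, t)
```

A split whose children mix treated and control units with very different pre-intervention histories is penalised, which is how selection bias is reduced before any effect is estimated.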
The weighting module is derived from efficiency-bound theory and incorporates (i) domain-distribution balancing, (ii) causal-information weighting, (iii) propensity-score weighting, and (iv) noise-based weighting, enabling the tree to align the treatment and control groups before estimating effects.
Experiments include (i) synthetic data with varying selection‑bias strength, showing GBCT’s robustness compared to meta‑learners and causal forests, and (ii) real credit‑card limit‑increase data, where GBCT consistently outperforms baselines, especially on biased datasets.
2. WMDL (Weighted Multi‑Domain Direct Learning)
When multiple data sources (domains) are available, the goal is to estimate domain‑specific causal effects. WMDL models the outcome as a sum of main effects, treatment, and domain‑specific components, and directly learns the causal effect (δ) without first estimating separate outcomes.
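One way to read "directly learns the causal effect δ" is through a pseudo-outcome whose conditional mean equals the treatment effect, so a single regression recovers δ(x) without fitting each arm's outcome separately. The sketch below follows that assumption; the function name, the linear/logistic model choices, and the one-hot domain encoding are illustrative, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def direct_learn_delta(X, t, y, d):
    """Sketch of direct learning of delta (illustrative, not the
    paper's exact estimator). d holds integer domain labels, used
    only to let the effect vary by domain via dummy features."""
    e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]  # propensity score
    m = LinearRegression().fit(X, y).predict(X)                # main-effect model
    # Pseudo-outcome: E[(t - e)(y - m) / (e(1 - e)) | X] = delta(X)
    pseudo = (t - e) * (y - m) / (e * (1.0 - e))
    Z = np.hstack([X, np.eye(d.max() + 1)[d]])                 # append domain dummies
    return LinearRegression().fit(Z, pseudo)
```

Because δ is regressed on directly, errors in the outcome model enter only through the residual y − m, which is the sense in which the approach avoids first estimating separate outcomes.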
The framework consists of three modules: propensity‑score estimation, outcome‑model learning, and a causal‑information‑aware weighting module. The weighting module balances domain distribution differences, emphasizes samples with overlap, and down‑weights noisy observations, yielding a doubly‑robust estimator.
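The three ingredients of the weighting module can be combined multiplicatively in a simple illustrative form; this specific formula is an assumption for exposition, not the estimator derived in the paper.

```python
import numpy as np

def causal_information_weights(e, r, sigma2):
    """Illustrative causal-information-aware weight:
    r         -- density ratio aligning a source domain with the target domain
    e*(1 - e) -- largest where treated/control overlap is strongest
    1/sigma2  -- down-weights noisy observations
    Returned weights are normalised to mean one."""
    w = r * e * (1.0 - e) / sigma2
    return w / w.mean()
```

Samples in regions of poor overlap (propensity near 0 or 1), from mismatched domains, or with high outcome noise all receive small weights, which is what drives the variance reduction reported below.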
Empirical results on synthetic and real datasets demonstrate that WMDL achieves lower variance and higher accuracy than traditional methods, and ablation studies confirm the importance of each module.
Business Application
In Ant Group’s credit‑risk scenario, GBCT uses historical user behavior before a credit‑limit increase to correct selection bias, leading to more accurate post‑intervention effect estimates. The method helps decision‑makers control balances and risks of credit products.
Q&A Highlights
GBCT vs. Difference-in-Differences: both use historical data, but DID assumes parallel trends (a fixed gap between groups over time), whereas GBCT aligns the groups via explicit bias correction on pre-intervention data.
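To make the contrast concrete, here is the DID computation on hypothetical group means (the numbers are invented for illustration, not from the talk):

```python
# Hypothetical average outcomes (not from the talk)
yt_pre, yt_post = 10.0, 15.0   # treated group, before / after intervention
yc_pre, yc_post = 8.0, 11.0    # control group, before / after intervention

# DID credits the treatment with whatever exceeds the control trend,
# assuming the pre-intervention gap would otherwise have stayed fixed
did = (yt_post - yt_pre) - (yc_post - yc_pre)
print(did)  # 2.0
```

GBCT instead uses the pre-intervention data to correct the grouping itself rather than assuming the gap stays fixed.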
GBCT’s advantage under unobserved confounders stems from the confounding-entropy term, which penalizes splits that leave pre-intervention distributions imbalanced and thereby reduces the bias such confounders induce.
Comparison with Double Machine Learning shows GBCT’s tree‑based approach leverages historical data more effectively.
References
Tang et al., "Debiased Causal Tree: Heterogeneous Treatment Effects Estimation with Unmeasured Confounding", NeurIPS 2022.
Li et al., "Robust Direct Learning for Causal Data Fusion", ACML 2022.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.