Causal Inference and Uplift Modeling for Insurance Recommendation and Explainability
This article explains how uplift sensitivity prediction, Bayesian causal networks, and decision‑path construction are applied to improve insurance product, coupon, and copy recommendations on the Fliggy platform, detailing modeling approaches, evaluation metrics, and practical outcomes of the causal inference framework.
The presentation introduces causal inference techniques used in Fliggy's insurance recommendation module, covering four main parts: uplift sensitivity prediction, its applications, Bayesian causal networks, and decision‑path construction with explainability.
Uplift Sensitivity Prediction – Describes the business problem of estimating the average treatment effect (ATE) of a new marketing action versus the baseline, focusing on identifying user groups (e.g., Persuadables) that respond positively to insurance recommendations. It discusses the counterfactual challenge, individual treatment effect (ITE) estimation, and the decomposition of uplift into conversion rates under treatment and control.
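The decomposition mentioned above is conventionally written as the difference between conditional conversion probabilities under treatment and control (standard uplift notation, not quoted verbatim from the talk):

```latex
\tau(x) \;=\; \mathbb{E}[Y \mid X = x,\, T = 1] \;-\; \mathbb{E}[Y \mid X = x,\, T = 0]
```

Persuadables are the users for whom this difference is large and positive; because each user is observed under only one of the two arms, the counterfactual term must be estimated rather than measured directly.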
Modeling considerations include single‑variable vs. multi‑variable uplift, direct vs. indirect probability estimation (e.g., tree‑based methods, LR, GBDT, deep models), and the use of the T‑learner, S‑learner, and X‑learner frameworks.
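The talk later reports that an LR-based T‑learner performed best. As a minimal sketch (synthetic data and hypothetical features, not Fliggy's actual pipeline), a T‑learner fits one response model per experiment arm and scores uplift as the difference of predicted conversion probabilities:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: x0 drives baseline conversion, x1 drives treatment response.
n = 4000
X = rng.normal(size=(n, 3))
t = rng.integers(0, 2, size=n)              # treatment indicator
base = 1 / (1 + np.exp(-X[:, 0]))           # control conversion probability
lift = 0.15 * (X[:, 1] > 0)                 # heterogeneous treatment effect
y = rng.binomial(1, np.clip(base + t * lift, 0, 1))

# T-learner: one response model per arm; uplift = difference of predictions.
m_treat = LogisticRegression().fit(X[t == 1], y[t == 1])
m_ctrl = LogisticRegression().fit(X[t == 0], y[t == 0])
uplift = m_treat.predict_proba(X)[:, 1] - m_ctrl.predict_proba(X)[:, 1]

# Persuadables: users with high predicted uplift get the marketing action.
print(uplift[:5])
```

An S‑learner would instead fit a single model on `(X, t)` and difference its predictions at `t=1` and `t=0`; the two-model form above avoids the single model drowning out a weak treatment signal.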
Application of Uplift – Shows how uplift guides three use cases: insurance product recommendation, coupon recommendation, and copy recommendation. It explains the business constraints, AB experiment design, sample construction, feature importance analysis, and the evaluation of models (LR+T‑Learner performed best). The uplift model increased conversion by 5.8% and ROI by 1.2 in online tests.
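A common way to compare uplift models against AB-test data (illustrative only; the talk does not specify its exact metric) is the observed uplift in the top-k fraction of users ranked by predicted score, since a good model should concentrate real treatment effect at the top of its ranking:

```python
import numpy as np

def uplift_at_k(score, y, t, k=0.3):
    """Observed uplift among the top-k fraction ranked by predicted score:
    difference in realized conversion between treated and control users
    inside that bucket."""
    top = np.argsort(-score)[: int(len(score) * k)]
    yt, tt = y[top], t[top]
    treated = tt == 1
    return yt[treated].mean() - yt[~treated].mean()

# Synthetic AB-test log: the true effect only exists where score > 0.
rng = np.random.default_rng(1)
n = 2000
t = rng.integers(0, 2, n)
score = rng.normal(size=n)
y = rng.binomial(1, np.clip(0.2 + 0.3 * t * (score > 0), 0, 1))
print(round(uplift_at_k(score, y, t), 3))
```

Sweeping `k` from 0 to 1 traces a cumulative-uplift (Qini-style) curve, which ranks candidate models the same way AUC ranks classifiers.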
Bayesian Causal Network – Introduces Bayesian networks to capture causal relationships among user attributes, events, and creative copy. It outlines four learning tasks: structure learning, parameter estimation, inference, and attribution, and describes the use of Heckerman‑style Bayesian scoring with greedy search for structure discovery.
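Score-based structure discovery with greedy search can be sketched on a toy problem (using a BIC score here as a stand-in for the Bayesian score described in the talk; variables and data are invented): each candidate parent set is scored by fit minus a complexity penalty, and the search greedily keeps the best-scoring edge.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(2)

# Tiny binary dataset with ground truth A -> B; C is independent noise.
n = 3000
A = rng.integers(0, 2, n)
B = (A ^ (rng.random(n) < 0.1)).astype(int)   # B mostly copies A
C = rng.integers(0, 2, n)
data = {"A": A, "B": B, "C": C}

def log_lik(child, parents):
    """Log-likelihood of `child` under MLE conditional probability tables."""
    y = data[child]
    if not parents:
        p = np.clip(y.mean(), 1e-6, 1 - 1e-6)
        return (y * np.log(p) + (1 - y) * np.log(1 - p)).sum()
    cols = np.stack([data[p] for p in parents], axis=1)
    ll = 0.0
    for combo in product([0, 1], repeat=len(parents)):
        mask = (cols == combo).all(axis=1)
        if mask.sum() == 0:
            continue
        p = np.clip(y[mask].mean(), 1e-6, 1 - 1e-6)
        ll += (y[mask] * np.log(p) + (1 - y[mask]) * np.log(1 - p)).sum()
    return ll

def bic(child, parents):
    k = 2 ** len(parents)            # free parameters for a binary child
    return log_lik(child, parents) - 0.5 * k * np.log(n)

# Greedy step: adopt the single parent of B that most improves the score.
best, best_score = [], bic("B", [])
for cand in ["A", "C"]:
    s = bic("B", [cand])
    if s > best_score:
        best, best_score = [cand], s
print(best)  # ["A"]: the true edge improves the score; the noise edge does not
```

A full structure learner repeats such add/remove/reverse moves over all edges until no move improves the score.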
Decision‑Path Construction and Explainability – Details how user, event, and creative nodes are defined, how conditional probabilities are computed, and how the network supports inference (likelihood weighting, loopy belief propagation) and attribution to explain why certain treatments succeed.
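Likelihood weighting, one of the inference methods named above, can be illustrated on a toy user → event → conversion chain (the CPT numbers below are invented for the sketch, not taken from the talk): non-evidence nodes are sampled forward, and each sample is weighted by the probability of the observed evidence given its parents.

```python
import random

# Toy chain: user segment -> click event -> conversion.
# All probabilities are illustrative, not from the talk.
P_SEG = 0.4                                   # P(high-intent user)
P_CLICK = {True: 0.7, False: 0.2}             # P(click | segment)
P_CONV = {True: 0.5, False: 0.05}             # P(convert | click)

def likelihood_weighting(evidence_conv=True, n=50_000, seed=3):
    """Estimate P(high-intent segment | conversion = evidence).
    Sample non-evidence nodes forward; weight each sample by
    P(evidence | its parents) instead of sampling the evidence node."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        seg = rng.random() < P_SEG
        click = rng.random() < P_CLICK[seg]
        w = P_CONV[click] if evidence_conv else 1 - P_CONV[click]
        den += w
        if seg:
            num += w
    return num / den

# The exact posterior here is 0.146 / 0.230 ≈ 0.635; the weighted
# estimate converges to it as n grows.
print(round(likelihood_weighting(), 3))
```

Attribution then follows the same machinery in reverse: conditioning on a successful conversion and asking which upstream nodes (user traits, events, creative) most raised its probability.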
The Q&A section confirms that the uplift model was validated online and discusses feature selection, bias concerns, and future directions such as creative copy recommendation.
DataFunTalk