
Causal Inference for Recommendation Systems: Disentangling User Interest, Conformity, Long‑Term/Short‑Term Interests, and Debiasing Short‑Video Recommendations

This presentation reviews recent research on applying causal inference to recommendation systems, covering causal embedding for separating user interest and conformity, contrastive learning for disentangling long‑term and short‑term interests, and a debiasing framework for short‑video recommendation that uses watch‑time‑gain metrics and adversarial learning to mitigate duration bias.

DataFunTalk

The talk introduces three research directions that leverage causal inference to improve recommender systems. First, the causal embedding method DICE disentangles user interest from conformity by assigning each factor an independent representation and training with cause-specific contrastive losses, which mitigates popularity bias and makes the learned representations more interpretable.
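The core idea can be sketched as follows: each user and item gets two embeddings (interest and conformity/popularity), the click score sums both components, and pairwise losses are routed to the cause that can explain the click. This is a minimal numpy sketch with hypothetical variable names and a simplified routing rule, not the actual DICE implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 4, 6, 8

# Separate embedding tables for each causal factor.
user_interest = rng.normal(size=(n_users, d))
user_conformity = rng.normal(size=(n_users, d))
item_interest = rng.normal(size=(n_items, d))
item_popularity = rng.normal(size=(n_items, d))

def score(u, i):
    # The final click score sums the two causal components.
    return (user_interest[u] @ item_interest[i]
            + user_conformity[u] @ item_popularity[i])

def bpr_loss(pos, neg):
    # Pairwise BPR loss: the positive item should outrank the negative.
    return -np.log(1.0 / (1.0 + np.exp(-(pos - neg))))

# Cause-specific routing: if the clicked item is LESS popular than the
# sampled negative, conformity cannot explain the click, so the interest
# embeddings also receive a pairwise loss of their own; the conformity
# embeddings are always trained on popularity-ordered pairs.
item_pop = rng.integers(1, 100, size=n_items)
u, pos_i, neg_i = 0, 2, 5
losses = [bpr_loss(user_conformity[u] @ item_popularity[pos_i],
                   user_conformity[u] @ item_popularity[neg_i])]
if item_pop[pos_i] < item_pop[neg_i]:
    losses.append(bpr_loss(user_interest[u] @ item_interest[pos_i],
                           user_interest[u] @ item_interest[neg_i]))
total = sum(losses) + bpr_loss(score(u, pos_i), score(u, neg_i))
print(float(total))
```

In the full method these per-cause losses are minimized jointly over many sampled pairs, so gradients only flow into the embeddings whose cause is consistent with the observed comparison.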

Second, CLSR, a contrastive framework for disentangling long‑term and short‑term interests, addresses the mixing of interest signals in sequential recommendation. It builds separate encoders for stable long‑term interests and dynamic short‑term interests, derives proxy labels from pooling over the interaction history for self‑supervision, and adaptively fuses the two representations via multi‑task curriculum learning.
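The two-encoder-plus-proxy structure can be illustrated in a few lines. Below, attention-style pooling stands in for the paper's sequence encoders, whole-history and recent-window means serve as the long- and short-term proxies, and a fixed sigmoid gate stands in for the learned fusion MLP; all names and window sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
seq_len, d, recent_k = 20, 8, 3
history = rng.normal(size=(seq_len, d))  # item embeddings of the behavior sequence

def attn_pool(items, query):
    # Attention-style pooling: softmax over query-item dot products.
    w = np.exp(items @ query)
    return (w / w.sum()) @ items

# Two separate encoders, one per interest type.
long_term = attn_pool(history, rng.normal(size=d))
short_term = attn_pool(history[-recent_k:], rng.normal(size=d))

# Proxy labels for self-supervision: whole-history mean pooling proxies
# long-term interest; a recent-window mean proxies short-term interest.
proxy_long = history.mean(axis=0)
proxy_short = history[-5:].mean(axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Contrastive constraint: each encoder output should be closer to its
# own proxy than to the other encoder's proxy (hinge-style margin loss).
loss_long = max(0.0, cos(long_term, proxy_short) - cos(long_term, proxy_long))
loss_short = max(0.0, cos(short_term, proxy_long) - cos(short_term, proxy_short))

# Adaptive fusion: a gate mixes the two interests for prediction
# (a learned MLP in the paper; a sigmoid stand-in here).
alpha = 1.0 / (1.0 + np.exp(-cos(short_term, history[-1])))
fused = alpha * short_term + (1.0 - alpha) * long_term
print(fused.shape, loss_long + loss_short)
```

The contrastive terms are auxiliary tasks trained alongside the main click-prediction loss, which is what makes the setup multi-task.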

Third, the short‑video debiasing work DVR defines a duration‑independent engagement metric, Watch‑Time‑Gain (WTG), together with its ranking‑aware variant DCWTG. An adversarial training scheme then forces the recommender to discard video‑duration information, yielding recommendations that are unbiased across videos of different lengths.
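The intuition behind a duration-independent metric is that raw watch time should only be compared among videos of similar length. A minimal sketch of that idea, assuming equal-frequency duration buckets and within-bucket standardization (the bucket count and normalization details here are illustrative, not the paper's exact definition):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
duration = rng.uniform(5, 300, size=n)           # video length in seconds
# Raw watch time scales with duration: longer videos accumulate more
# watch time regardless of how much the user actually likes them.
watch = duration * rng.uniform(0.1, 1.0, size=n)

# Watch-Time-Gain sketch: standardize watch time within equal-frequency
# duration buckets so engagement is comparable across video lengths.
n_buckets = 10
edges = np.quantile(duration, np.linspace(0, 1, n_buckets + 1))
bucket = np.clip(np.searchsorted(edges, duration, side="right") - 1,
                 0, n_buckets - 1)

wtg = np.empty(n)
for b in range(n_buckets):
    m = bucket == b
    wtg[m] = (watch[m] - watch[m].mean()) / (watch[m].std() + 1e-8)

# Raw watch time is strongly correlated with duration; WTG is not.
print(np.corrcoef(duration, watch)[0, 1], np.corrcoef(duration, wtg)[0, 1])
```

Ranking by WTG instead of raw watch time removes the mechanical advantage of long videos; DCWTG then applies a DCG-style position discount on top, and the adversarial component additionally prevents the model from re-learning duration from its input features.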

Extensive experiments on e‑commerce and short‑video datasets demonstrate that each method consistently improves ranking metrics, provides more interpretable representations, and reduces unfair exposure of long videos, confirming the effectiveness of causal‑based designs in recommender systems.

Machine learning, recommender systems, causal inference, bias mitigation, interest disentanglement, short-video recommendation
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
