
How Minute‑Level Time Decay Boosts User Retention Modeling in Recommendation Systems

This article presents a novel minute‑level future‑reward framework with dual‑delay incentives, activity‑based attribution, multi‑task delayed modeling, and sequential streaming training that dramatically improves user retention prediction accuracy and real‑time performance in large‑scale recommendation platforms.

Zhihu Tech Column

Introduction

In the era of information explosion, recommendation systems are shifting from pure traffic consumption to value‑driven strategies. User retention, a key indicator of platform health, faces challenges such as data sparsity, signal ambiguity, heterogeneous user groups, and delayed feedback.

Background and Challenges

Data sparsity: collapsing daily behavior into a binary signal loses rich information.

Signal ambiguity: multiple sources interfere with retention signals.

Group heterogeneity: high‑activity and low‑activity users respond differently to content.

Real‑time modeling: delayed observation of retention signals hampers timely decisions.

Proposed Retention Modeling Solution

We break the limitation of traditional day‑level modeling by introducing a minute‑level future‑reward framework combined with an activity‑based attribution mechanism. The approach includes:

Dual‑delay reward: a short‑term ten‑minute sliding window plus long‑term adaptive weighting via counterfactual causal inference.

Minute‑level signal construction: progressive time‑window slicing creates high‑density behavior units for near‑term patterns and hour‑level windows for long‑term patterns.

Multi‑task delayed modeling: jointly learns the retention and click‑through tasks, sharing features while applying a stop‑gradient to the CTR branch.

Activity‑adaptive attribution gate: modulates content‑feature importance based on the user's activity level.

Sequential streaming training: progressive learning updates parameters as soon as the first time window is observed, avoiding waiting for delayed signals.
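The activity‑adaptive attribution gate above can be sketched in a few lines. Everything here is an illustrative assumption: the function `activity_gate`, its sigmoid form, and the parameters `w` and `b` are hypothetical stand‑ins for the learned gate the article describes, not the production implementation.

```python
import numpy as np

def activity_gate(activity_level, content_features, w=2.0, b=-1.0):
    """Illustrative activity-adaptive attribution gate (a sketch, not the
    deployed model).

    activity_level:   normalized user activity in [0, 1] (hypothetical input)
    content_features: 1-D array of content feature values
    w, b:             gate parameters; in the real model these would be learned.
    """
    # Sigmoid gate: higher user activity lets more content signal through.
    g = 1.0 / (1.0 + np.exp(-(w * activity_level + b)))
    # Modulate content-feature importance by the user's activity level.
    return g * content_features

low = activity_gate(0.1, np.ones(4))   # low-activity user: features damped
high = activity_gate(0.9, np.ones(4))  # high-activity user: features pass through
```

A gate of this shape lets one shared network serve heterogeneous user groups: the same content features are down‑weighted for users whose retention depends less on content, which matches the group‑heterogeneity challenge described earlier.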

Mathematical Formulation

The future retention reward for a user‑content pair is expressed as a weighted sum of minute‑level counts:

R = Σ_t α_t · f(count_t)

where α_t is a time‑decay attention weight that emphasizes recent visits and count_t is the visit count in minute‑level window t.
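As a concrete sketch of this weighted sum: below, α_t is a normalized exponential‑recency weight and f is log1p. Both are hypothetical choices made for illustration, since the article only says that α_t emphasizes recent visits and does not specify either form.

```python
import numpy as np

def future_retention_reward(counts, decay=0.9):
    """R = sum_t alpha_t * f(count_t) over minute-level visit counts.

    counts: visit counts per minute-level window, ordered oldest -> newest.
    decay:  exponential recency factor (assumed form of the time-decay
            attention; the article does not give the exact weighting).
    f:      log1p squashing of raw counts (also an assumed choice).
    """
    t = np.arange(len(counts))
    scores = decay ** (len(counts) - 1 - t)  # newest window scores 1.0
    alpha = scores / scores.sum()            # normalized attention weights
    return float(np.sum(alpha * np.log1p(counts)))
```

The key property to check is that identical activity placed closer to "now" yields a larger reward, e.g. `future_retention_reward([0, 0, 0, 5])` exceeds `future_retention_reward([5, 0, 0, 0])`, which is exactly what "emphasizing recent visits" requires.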

Application scenario

Effect Overview

Deployed in Zhihu’s full‑stack recommendation pipeline, the model improves retention metrics, boosts content ecosystem vitality, and raises user engagement across recall, coarse‑ranking, and fine‑ranking stages.

Retention model structure

Conclusion and Outlook

The minute‑level retention modeling method significantly enhances accuracy and real‑time capability, achieving large‑scale A/B test gains. Future work will explore listwise retention modeling and integrate multimodal pretrained representations to further improve personalized recommendation.

