
Graph-based Deep Recall Models for Sparse User Behavior in Content Recommendation

The paper proposes graph‑based deep recall models that enrich sparse user behavior sequences in video recommendation by integrating content knowledge graphs and adaptive attention mechanisms, demonstrating that variants such as GADM, SGGA, and SGGGA significantly boost click‑through rates in online experiments.

DaTaobao Tech

This article presents a systematic study of recall techniques applied to content recommendation, focusing on the challenge of sparse user behavior.

Background: In video recommendation, many users generate only sparse interaction sequences, which makes conventional sequence-based modeling ineffective. Existing remedies include leveraging side information, pre-training on dense users, transferring behavior across domains, constructing knowledge graphs (KGs), and building user-to-user (u2u) graphs.

Proposed Approach: From a graph‑model perspective, the authors explore extending user behavior graphs by integrating content‑based KG information. Two main families of graph‑based recommendation are discussed: embedding‑based (e.g., KGE, CKE, DKN, SHINE) and path‑based (e.g., RippleNet, PER, KGAT). The paper introduces a novel sparse‑behavior sequence expansion method that combines video‑to‑video (v2v) expansion with KG augmentation.
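The expansion idea can be made concrete with a small sketch. The snippet below is an illustrative simplification, not the paper's implementation: it assumes a precomputed v2v neighbor table (e.g. from item-based collaborative filtering) and a KG-derived neighbor table (videos linked via shared entities such as actors or tags), and grows a sparse watch sequence by interleaving candidates from both sources.

```python
def expand_sequence(seq, v2v_neighbors, kg_neighbors, max_per_item=3):
    """Expand a sparse behavior sequence with v2v- and KG-derived candidates.

    seq: list of watched video ids (may be very short for sparse users)
    v2v_neighbors: dict video_id -> similar video ids (hypothetical v2v table)
    kg_neighbors: dict video_id -> video ids linked through shared KG entities
    """
    expanded, seen = list(seq), set(seq)
    for vid in seq:
        # Draw candidates from both graph sources so each contributes signal.
        candidates = v2v_neighbors.get(vid, []) + kg_neighbors.get(vid, [])
        added = 0
        for cand in candidates:
            if cand not in seen and added < max_per_item:
                expanded.append(cand)  # keep original order, append expansions
                seen.add(cand)
                added += 1
    return expanded
```

A one-item history can thus be widened into a sequence long enough for downstream sequence or graph models, with `max_per_item` capping how far the expansion drifts from observed behavior.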

Model Variants:

GADM (Graph Attention Deep Recall Model): builds a basic topology and applies multiple aggregation strategies, initially inspired by RippleNet attention, then enhanced with KGAT-style adaptive attention.

SGGA (Self-adaption Generative Graph Attention): adds an encoder-decoder architecture to model the influence between consecutive behaviors, producing personalized edge weights (entity-to-entity attention).

SGGGA (Self-adaption Generative Gating Graph Attention): extends SGGA by introducing KG-guided gating on each relation, allowing the model to filter and weight graph features.
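The KGAT-style adaptive attention that underpins these variants can be sketched as follows. This is a minimal NumPy rendition of the standard KGAT scoring form, not the paper's code: the relation-specific projection matrix is omitted for brevity, and the relation-gating step of SGGGA is shown only as a comment.

```python
import numpy as np

def adaptive_attention_aggregate(e_h, neighbors):
    """Aggregate one entity's KG neighborhood with adaptive attention.

    e_h: (d,) head-entity embedding
    neighbors: list of (e_r, e_t) pairs -- relation and tail embeddings, (d,) each
    Returns the attention-weighted neighborhood embedding, shape (d,).
    """
    # Score each triple, KGAT-style: pi(h, r, t) = e_t . tanh(e_h + e_r).
    # (The full formulation projects with a relation matrix W_r first.)
    scores = np.array([e_t @ np.tanh(e_h + e_r) for e_r, e_t in neighbors])

    # Softmax over the neighborhood so edge weights are adaptive, not uniform.
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()

    # SGGGA would additionally gate each relation here,
    # e.g. weights *= sigmoid(gate_vector @ e_r), before aggregating.
    tails = np.stack([e_t for _, e_t in neighbors])
    return weights @ tails
```

With uniform scores this reduces to mean pooling; the attention only departs from that baseline when some relations are genuinely more informative for the head entity, which is the behavior the paper's variants refine further with generative, personalized edge weights.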

Results: The deployed models achieve notable online improvements: GADM-v1 yields +1.5% pCTR and +2.65% uCTR; GADM-v2 improves +2.1% pCTR and +1.4% uCTR. The SGGA and SGGGA variants further enhance performance by adaptively weighting graph edges.

Conclusion: Graph and KG‑based methods effectively mitigate sparse behavior issues by constructing richer topological connections between users and content. Ongoing work aims to refine these models and explore additional graph‑based extensions.

Tags: Deep Learning, Attention, Recommendation Systems, Graph Neural Networks, Knowledge Graph, Sparse Behavior
Written by DaTaobao Tech

Official account of DaTaobao Technology