Multi-Granularity Attention Model for Group Recommendation (MGAM)
The Multi‑Granularity Attention Model (MGAM) improves group recommendation by extracting subset‑, group‑, and superset‑level preferences through hierarchical attention and graph neural networks, then fusing them via self‑attention. It achieves state‑of‑the‑art offline results and a 1.2% online CTR lift in Alibaba's local‑life services.
The paper "Multi-Granularity Attention Model for Group Recommendation" was accepted as a short paper at CIKM 2023. The full paper is available at https://arxiv.org/abs/2308.04017.
1. Introduction
Group recommendation aims to recommend items to a set of users (a group) rather than to a single user. In Alibaba's local‑life services, the problem extends to matching venues (e.g., cloud themes, scenario cards) to groups of users, which we refer to as the "scene‑item" matching problem.
2. Background
The local‑life business exhibits strong spatial and temporal heterogeneity: supply is grid‑like and limited to specific business districts, while user demand varies across regions. Traditional single‑user recommendation cannot capture these regional group preferences, motivating a group‑centric approach.
3. Related Work
Group recommendation methods fall into two categories:
Memory‑based approaches (preference aggregation, and score aggregation such as average (AVG), least misery (LM), and most satisfaction (MS)).
Model‑based approaches, including traditional methods (PIT, COM) and deep‑learning methods (DLGR, AGREE).
These methods either ignore intra‑group interactions or assume that a user's influence is static across groups.
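To make the score‑aggregation strategies concrete, the three classic rules mentioned above each collapse per‑member predicted scores into a single group score. A minimal sketch (the scores are made up for illustration):

```python
def avg(scores):
    # AVG: every member is weighted equally.
    return sum(scores) / len(scores)

def least_misery(scores):
    # LM: the group is only as happy as its least happy member.
    return min(scores)

def most_satisfaction(scores):
    # MS: the most enthusiastic member decides for the group.
    return max(scores)

group_scores = [0.9, 0.4, 0.7]  # per-user predicted scores for one candidate item
print(avg(group_scores), least_misery(group_scores), most_satisfaction(group_scores))
```

Note how none of these rules adapts the weighting to the candidate item, which is exactly the limitation the attention‑based methods address.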
4. Neural Attention
Neural attention models compute a dynamic attention score for each user‑item pair within a group, allowing the model to weight users differently for each candidate item. AGREE is a seminal work in this direction.
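The per‑user attention weighting can be sketched as follows. This is a toy NumPy version, not AGREE's exact parameterization: `W`, `b`, and `v` stand in for learned attention parameters and are randomly initialized here.

```python
import numpy as np

def attentive_group_embedding(user_embs, item_emb, W, b, v):
    # Score each (user, item) pair with a small MLP, softmax the scores
    # over the group members, and return the weighted sum of user embeddings.
    x = np.concatenate(
        [user_embs, np.tile(item_emb, (len(user_embs), 1))], axis=1
    )                                        # (n_users, 2d): user ++ item
    logits = np.tanh(x @ W + b) @ v          # one scalar score per user
    weights = np.exp(logits - logits.max())  # numerically stable softmax
    weights /= weights.sum()
    return weights @ user_embs, weights      # attention-weighted group vector

rng = np.random.default_rng(0)
d = 4                                        # toy embedding size
user_embs = rng.normal(size=(3, d))          # three group members
item_emb = rng.normal(size=d)                # one candidate item
W = rng.normal(size=(2 * d, d))              # illustrative parameters
b = rng.normal(size=d)
v = rng.normal(size=d)
group_vec, weights = attentive_group_embedding(user_embs, item_emb, W, b, v)
```

Because the item embedding enters the score computation, the same group gets a different member weighting for each candidate item, unlike the static aggregation rules above.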
5. Proposed Model (MGAM)
MGAM leverages three granularity levels:
Subset Preference Extraction (SubPE): Users with similar preferences within a group are clustered into subsets. A hierarchical attention network first aggregates user‑level preferences into a subset vector, then applies subset‑level attention to capture interactions among subsets.
Group Preference Extraction (GPE): Generates a group‑level preference vector directly from the whole group.
Superset Preference Extraction (SupPE): Constructs a graph whose nodes are groups, with edges between groups that share members. A graph neural network aggregates this external group information into a superset‑level preference vector.
These three vectors are concatenated and fed into a fusion layer based on self‑attention to dynamically combine multi‑granularity preferences.
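Under the simplifying assumption of a single attention head, the self‑attention fusion over the three granularity vectors might look like this. This is a hypothetical sketch: `Wq`, `Wk`, and `Wv` are illustrative learned projections, not the paper's exact fusion layer.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(subset_vec, group_vec, superset_vec, Wq, Wk, Wv):
    # Treat the three granularity vectors as a length-3 sequence and run
    # single-head scaled dot-product self-attention over it, so each
    # granularity can attend to the other two before fusion.
    X = np.stack([subset_vec, group_vec, superset_vec])   # (3, d)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)   # (3, 3) weights
    return (A @ V).reshape(-1)                            # flatten to one vector

rng = np.random.default_rng(1)
d = 4                                                     # toy embedding size
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
fused = fuse(rng.normal(size=d), rng.normal(size=d), rng.normal(size=d),
             Wq, Wk, Wv)
```

The attention matrix lets the model decide, per group, how much the subset, group, and superset signals should contribute, rather than fixing their mixture up front.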
6. Fusion and Optimization
The fused vector is combined with the candidate item embedding via a Hadamard product, then passed through a sigmoid activation to predict the interaction score. The loss function mixes a triplet loss with a pointwise loss, with hyper‑parameters controlling the balance.
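A minimal sketch of the scoring and loss just described. The `lam` and `margin` hyper‑parameters are illustrative placeholders, not the paper's values, and the exact form of the mixed loss is a plausible reading rather than the published formula.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(fused_vec, item_emb, w):
    # Hadamard (element-wise) product of the fused group vector and the
    # item embedding, projected to a scalar and squashed to a probability.
    return sigmoid((fused_vec * item_emb) @ w)

def mixed_loss(s_pos, s_neg, lam=0.5, margin=0.5):
    # Hypothetical blend: a triplet (margin) term that pushes the positive
    # item's score above the negative item's, plus a pointwise log loss;
    # lam is the balancing hyper-parameter.
    triplet = max(0.0, margin - (s_pos - s_neg))
    pointwise = -(np.log(s_pos) + np.log(1.0 - s_neg))
    return lam * triplet + (1.0 - lam) * pointwise
```

For example, `mixed_loss(predict(f, i_pos, w), predict(f, i_neg, w))` would score one positive/negative item pair for a given fused group vector `f`.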
7. Experiments
Datasets: one industrial dataset from Alibaba local‑life services, and three public datasets (MovieLens‑1M, Meetup‑NYC, Meetup‑CA). MGAM achieves state‑of‑the‑art performance on all offline metrics.
An online A/B test in the food‑delivery service shows a 1.2% average CTR lift over the AGREE baseline, confirming practical effectiveness.
8. Conclusion
MGAM introduces multi‑granularity attention to better capture user preferences in group recommendation, reducing noise and improving both offline and online performance.
References (selected): McCarthy & Anagnost 1998; Yu et al. 2006; Seo et al. 2018; Cao et al. 2018 (AGREE); Tran et al. 2019 (MoSAN); and many others covering memory‑based, model‑based, and deep learning approaches.
Ele.me Technology