Feedback‑Aware Deep Matching Model for Music Recommendation in Tmall Genie
This article presents DeepMatch, a behavior-sequence-based deep learning recall model for music recommendation on Tmall Genie, enhanced with play-rate and intent-type embeddings. It describes the model's self-attention architecture, factorized embedding parameterization, multitask loss design, and distributed TensorFlow training tricks, and reports significant offline and online improvements.
Background
Traditional recommendation systems consist of a candidate-generation (matching) stage and a ranking stage, with the matching stage crucial to overall performance. Recent practice shows that behavior-sequence deep learning models combined with high-performance approximate nearest-neighbor search (the DeepMatch approach) can deliver both accuracy and speed.
Motivation
Existing DeepMatch models lack key signals such as negative feedback (play rate) and query intent type, which matter in scenarios like Tmall Genie music recommendation.
Method
1. Input Representations
• Item Embedding: shared across datasets, mapping one‑hot item vectors to low‑dimensional embeddings.
• Position Embedding: learned embeddings for each position in the behavior sequence.
• Play Rate Embedding: continuous play‑completion rate [0,1] projected into the same space as item embeddings; for the original dataset, a default value of 0.99 is used.
• Intent Type Embedding: categorical intent (e.g., exact play, recommendation) also embedded into a low‑dimensional space.
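The four inputs above are combined into a single representation per sequence position. A minimal NumPy sketch of that combination, assuming the signals are summed after projection (table sizes, names, and the linear play-rate projection are illustrative, not the production code):

```python
import numpy as np

rng = np.random.default_rng(0)

V, P, T = 1000, 50, 4   # vocab size, max positions, intent types (hypothetical)
H = 8                   # hidden size

# Hypothetical embedding tables; learned parameters in the real model
item_emb = rng.normal(size=(V, H))
pos_emb = rng.normal(size=(P, H))
intent_emb = rng.normal(size=(T, H))
play_rate_proj = rng.normal(size=(1, H))  # projects the scalar rate into H dims

def input_representation(item_ids, play_rates, intent_ids):
    """Sum item, position, play-rate, and intent signals per position."""
    n = len(item_ids)
    return (item_emb[item_ids]
            + pos_emb[np.arange(n)]
            + play_rates[:, None] @ play_rate_proj
            + intent_emb[intent_ids])

items = np.array([3, 17, 42])
rates = np.array([1.0, 0.2, 0.99])  # 0.99 is the default for the original dataset
intents = np.array([0, 1, 1])
x = input_representation(items, rates, intents)
print(x.shape)  # (3, 8)
```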
2. Factorized Embedding Parameterization Inspired by ALBERT, item one‑hot vectors are first projected to a small dimension E and then to the hidden size H, reducing parameters from O(V×H) to O(V×E+E×H).
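The parameter saving is easy to verify with concrete (hypothetical) sizes, e.g. V = 10^6 items, E = 32, H = 256:

```python
import numpy as np

V, E, H = 1_000_000, 32, 256

full_params = V * H                # direct V x H embedding table
factorized_params = V * E + E * H  # small table plus projection matrix

# Factorized lookup: item id -> E-dim embedding -> H-dim hidden vector
emb_VE = np.zeros((V, E))
proj_EH = np.zeros((E, H))
hidden = emb_VE[12345] @ proj_EH   # shape (H,)

print(full_params, factorized_params)  # 256000000 32008192
```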
3. Feedback‑Aware Multi‑Head Self‑Attention The model uses self‑attention where Query, Key, and Value are derived from item, position, play‑rate, and intent embeddings, allowing the attention mechanism to incorporate user feedback.
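A single-head, scaled dot-product sketch of this idea in NumPy: because the input x already sums item, position, play-rate, and intent embeddings, the attention weights are conditioned on user feedback. The weight matrices here are random placeholders, not trained parameters:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """x: (seq_len, H) input that already mixes item, position,
    play-rate, and intent embeddings."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
H = 8
x = rng.normal(size=(5, H))
Wq, Wk, Wv = (rng.normal(size=(H, H)) for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```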
4. Loss Functions
• Positive Feedback: Sampled Softmax Loss (using learned_unigram_candidate_sampler for best results).
• Negative Feedback: Sigmoid Cross‑Entropy Loss applied to low play‑rate items.
• Total Loss: sum of the two losses, enabling multitask learning that pushes negative items away while pulling positive items closer.
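The two loss terms above can be sketched in NumPy. The production model uses TensorFlow's sampled softmax with learned_unigram_candidate_sampler; the vectors and the plain log-sum-exp form below are illustrative only:

```python
import numpy as np

def sampled_softmax_loss(user_vec, pos_vec, sampled_vecs):
    # cross-entropy over the positive item vs. a set of sampled candidates
    logits = np.concatenate([[user_vec @ pos_vec], sampled_vecs @ user_vec])
    return -logits[0] + np.logaddexp.reduce(logits)

def negative_feedback_loss(user_vec, neg_vecs):
    # sigmoid cross-entropy with label 0: pushes low play-rate items away
    logits = neg_vecs @ user_vec
    return np.logaddexp(0.0, logits).sum()

rng = np.random.default_rng(1)
u = rng.normal(size=8)              # user/sequence vector
pos = rng.normal(size=8)            # positive (completed-play) item
sampled = rng.normal(size=(20, 8))  # sampled-softmax candidates
negs = rng.normal(size=(3, 8))      # low play-rate (negative-feedback) items

# Total multitask loss: pull positives closer, push negatives away
total_loss = sampled_softmax_loss(u, pos, sampled) + negative_feedback_loss(u, negs)
```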
Training Details
Distributed training is performed with TensorFlow’s ParameterServer strategy. Key tricks include proper embedding partitioning, deduplication of sampled items within a batch, and flexible masking for attention.
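One of these tricks, deduplicating sampled items within a batch, amounts to gathering each unique embedding once and scattering it back. A small NumPy sketch (function and variable names are hypothetical):

```python
import numpy as np

def dedup_gather(sampled_ids, item_table):
    """Look up each unique id once, then restore the original order."""
    uniq, inverse = np.unique(sampled_ids, return_inverse=True)
    uniq_vecs = item_table[uniq]   # one gather per unique id
    return uniq_vecs[inverse]      # scatter back to the sampled order

item_table = np.arange(50, dtype=float).reshape(10, 5)
ids = np.array([3, 7, 3, 1])       # duplicates within one batch
vecs = dedup_gather(ids, item_table)
```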
Experiments
Offline metrics (POS@N and NEG@N) show that adding play‑rate and intent embeddings improves POS@N, and incorporating negative‑feedback multitask learning significantly reduces NEG@N. Online A/B tests in Tmall Genie’s “You Might Like” scenario report a +9.2% increase in average playback duration per user.
Conclusion
Incorporating feedback signals and intent information into a deep matching model via multitask learning yields both higher relevance and better user satisfaction in large-scale music recommendation systems.
DataFunTalk