
Deep Match to Rank Model for Personalized Click-Through Rate Prediction

This article presents the Deep Match to Rank (DMR) model, which integrates matching and ranking stages in recommendation systems by jointly learning user‑to‑item and item‑to‑item representations with attention mechanisms, achieving significant CTR and DPV improvements in both offline experiments and large‑scale online deployments.

DataFunTalk

Background In e‑commerce recommendation, the task is to select the most attractive items for a user from a massive candidate pool. Traditional two‑stage pipelines first generate a shortlist of candidates (matching), often with collaborative filtering, and then rank them with a CTR prediction model. Personalization is crucial for CTR performance.

Model Overview The proposed Deep Match to Rank (DMR) model augments the ranking stage with explicit user‑to‑item (U2I) relevance modeling, inspired by matching techniques. DMR consists of two sub‑networks: a User‑to‑Item network and an Item‑to‑Item network, whose outputs are combined with a standard MLP before the final prediction.
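The top-level combination described above can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: the function name, the single ReLU hidden layer, and all parameter names are assumptions; it only shows how the two sub-network scores join the usual ranking features before the MLP.

```python
import numpy as np

def dmr_forward(mlp_features, u2i_score, i2i_score, W, b, w_out, b_out):
    """Hypothetical sketch of DMR's top-level combination: the U2I and
    I2I relevance scores are concatenated with the standard ranking
    features and fed through an MLP to predict CTR."""
    # Append the two scalar relevance scores to the feature vector.
    x = np.concatenate([mlp_features, [u2i_score, i2i_score]])
    h = np.maximum(W @ x + b, 0.0)        # one ReLU hidden layer (stand-in)
    logit = w_out @ h + b_out
    return 1.0 / (1.0 + np.exp(-logit))   # predicted click probability
```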

User‑to‑Item Network Inspired by matrix factorization, this sub‑network computes the inner product between a user representation, derived from the user's behavior sequence, and the target item's representation. Behavior embeddings are weighted by an attention mechanism that takes positional encodings as the query. The weighted sum produces a fixed‑length user vector, which a fully connected layer then transforms into the same space as the item embeddings. The mathematical formulation is shown in the figure below.
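A minimal numpy sketch of this sub-network, under stated assumptions: the bilinear attention score, the `tanh` stand-in for the fully connected layer, and all names are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def u2i_relevance(behavior_embs, pos_embs, item_emb, W):
    """Hypothetical sketch of the User-to-Item sub-network: attention
    queried by positional encodings, weighted sum into a fixed-length
    user vector, then inner product with the target item embedding."""
    # One attention score per behavior; the query is its positional encoding.
    scores = np.array([p @ W @ b for p, b in zip(pos_embs, behavior_embs)])
    alpha = softmax(scores)
    user_vec = (alpha[:, None] * behavior_embs).sum(axis=0)  # fixed-length user vector
    user_vec = np.tanh(user_vec)         # stand-in for the fully connected layer
    return float(user_vec @ item_emb)    # U2I relevance score
```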

Item‑to‑Item Network This sub‑network captures U2I relevance indirectly by computing item‑to‑item (I2I) similarity between the target item and items in the user's behavior sequence using additive attention. The summed attention weights provide an alternative relevance signal that complements the direct inner‑product approach.
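The additive-attention idea can be sketched as below. This is an illustrative Bahdanau-style scoring function with assumed parameter names (`W1`, `W2`, `v`); the key point from the text is that the *sum* of the attention weights, rather than a normalized context vector, serves as the relevance signal.

```python
import numpy as np

def i2i_relevance(behavior_embs, target_emb, W1, W2, v):
    """Hypothetical sketch of the Item-to-Item sub-network: additive
    attention between the target item and each behavior item; the sum
    of the attention weights is the indirect U2I relevance signal."""
    weights = []
    for b in behavior_embs:
        # Additive (Bahdanau-style) attention score for one behavior item.
        weights.append(float(v @ np.tanh(W1 @ b + W2 @ target_emb)))
    return sum(weights)  # summed weights complement the inner-product score
```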

Training Objective DMR introduces an auxiliary DeepMatch loss that predicts the user's next clicked item with a softmax over the full item vocabulary, approximated by negative sampling to keep training tractable. The sampled loss applies a sigmoid term to the positive (actually clicked) item and to each sampled negative, and is added to the main ranking loss, encouraging larger inner‑product scores for truly relevant items.
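The negative-sampling approximation described above follows the standard sampled-softmax / NCE pattern; a minimal sketch, with all names assumed:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def deepmatch_loss(user_vec, pos_item_emb, neg_item_embs):
    """Hypothetical sketch of the auxiliary DeepMatch loss: the full
    softmax over all items is approximated by negative sampling,
    pushing the inner product with the next-clicked item up and the
    sampled negatives down."""
    loss = -np.log(sigmoid(user_vec @ pos_item_emb))   # positive term
    for n in neg_item_embs:
        loss -= np.log(sigmoid(-(user_vec @ n)))       # one term per negative
    return float(loss)
```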

Experiments Offline experiments on Alibaba’s public dataset and internal 1688 recommendation data demonstrate consistent gains. Online A/B tests on the 1688 "For You" recommendation service show a 5.5% CTR lift and a 12.8% DPV increase, leading to full rollout of DMR.

Results and Outlook The DMR model was submitted as "Deep Match to Rank Model for Personalized Click‑Through Rate Prediction" and accepted as an oral paper at AAAI‑20. The framework enables seamless integration of matching signals into ranking models and will serve as a foundation for future CTR improvements.


personalization · deep learning · CTR prediction · ranking · recommendation systems · matching
Written by

DataFunTalk

Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.
