
Calibration-compatible Listwise Distillation of Privileged Features for CTR Prediction

This article describes Alimama's approach to distilling privileged features for CTR prediction: a calibration-compatible listwise distillation loss (CLID) that normalizes teacher and student outputs within sessions and aligns their session-top probabilities, improving ranking ability while preserving calibration.

Alimama Tech

This article presents the Alimama display-advertising Rank team's practical experience with privileged features. Privileged features are features that the prediction model cannot access at serving time but that can be used offline to strengthen the model. A classic way to exploit them is Privileged Feature Distillation (PFD) [1,2]: PFD trains a teacher model on all features (including the privileged ones), then distills the teacher's capability into a student model that takes only non-privileged features (regular features available both offline and online) as input; the student is the model used for online scoring.
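As a minimal sketch of the PFD setup described above (function names, the pointwise distillation term, and the weight `lam` are illustrative assumptions, not the paper's exact formulation):

```python
import math

def bce(targets, preds, eps=1e-12):
    """Mean binary cross-entropy; `targets` may be hard click labels
    or soft teacher probabilities (soft targets give the distillation term)."""
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(targets, preds)) / len(preds)

def pfd_student_loss(labels, student_probs, teacher_probs, lam=0.5):
    """Illustrative PFD student objective: supervised loss on clicks plus a
    pointwise distillation term pulling the student toward the teacher,
    whose predictions were produced with privileged features that are
    unavailable to the student at serving time."""
    return bce(labels, student_probs) + lam * bce(teacher_probs, student_probs)
```

Only the student is deployed online; the teacher exists purely to generate the soft targets during offline training.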

For prediction models, calibration and ranking ability are the usual evaluation criteria. For privileged feature distillation, we found that pointwise and listwise distillation losses each have a shortcoming:

Pointwise distillation loss: a pointwise loss preserves calibration well, but its ranking ability falls short of distillation with a listwise loss.

Listwise distillation loss: distilling directly with a listwise loss produces miscalibrated predictions, which is unacceptable for an advertising system that depends on calibrated scores.

One way to improve calibration and ranking ability at the same time is to use a scale-calibrated ranking loss, such as Alimama's JRC [3] or Google's RCR [4]. In this work, tailored to the characteristics of the privileged feature distillation problem, we design a scale-calibrated listwise distillation loss (a calibration-compatible listwise distillation loss), CLID. CLID distills at session granularity: we normalize the student's and the teacher's output probabilities within each session to obtain, for each sample, the probability that it "ranks top in the session", and then align the student's session-top distribution with the teacher's. This distills the teacher's ranking ability while theoretically guaranteeing that the student's calibration is not harmed.
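The session-level normalization and alignment above can be sketched as follows (a simplified illustration under the stated description, not the paper's exact loss; function names are hypothetical). The key property is that the loss depends only on the student's relative probabilities within the session, so matching the teacher's ordering does not force the student's absolute, calibrated scale to move:

```python
import math

def session_normalize(probs):
    """Turn per-item CTR estimates into a distribution over
    'which item in this session ranks top'."""
    total = sum(probs)
    return [p / total for p in probs]

def clid_loss(student_probs, teacher_probs, eps=1e-12):
    """Listwise distillation within one session: cross-entropy from the
    teacher's session-normalized distribution to the student's. Invariant
    to a global rescaling of the student's probabilities."""
    q_teacher = session_normalize(teacher_probs)
    q_student = session_normalize(student_probs)
    return -sum(t * math.log(s + eps) for t, s in zip(q_teacher, q_student))
```

For example, multiplying every student probability in a session by a constant leaves `clid_loss` unchanged, which is the scale-invariance that lets a separate pointwise label loss keep the student calibrated.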

The paper based on this work has been accepted by WSDM 2024. We welcome you to read and discuss it!

Paper: Calibration-compatible Listwise Distillation of Privileged Features for CTR Prediction

Authors: Xiaoqiang Gui, Yueyao Cheng, Xiang-Rong Sheng, Yunfeng Zhao, Guoxian Yu, Shuguang Han, Yuning Jiang, Jian Xu, Bo Zheng

Link: https://arxiv.org/abs/2312.08727
