Enhancing Recommendation System Consistency via Feedback Cascading and LTR Models
This paper addresses consistency challenges in multi-stage recommendation systems through feedback cascading and learning-to-rank (LTR) models. It proposes a teacher-student architecture in which the fine-ranking model guides the coarse-ranking stage, improving score alignment between the two. Key innovations include a ΔnDCG-based loss function and bid-aware ranking optimizations. In A/B tests, the approach delivered a 4.8% RPM lift.
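The paper does not publish its loss formula, but a ΔnDCG-based objective is typically a LambdaRank-style pairwise loss in which each misordered pair is weighted by the nDCG change that swapping the two items would cause. The sketch below (function name, relevance-gain convention, and logistic pairwise term are illustrative assumptions, not the paper's exact definition) shows the general shape:

```python
import numpy as np

def delta_ndcg_pairwise_loss(scores, labels):
    """LambdaRank-style sketch: each pair (i, j) with labels[i] > labels[j]
    contributes a logistic pairwise loss weighted by |ΔnDCG|, the nDCG
    change that swapping items i and j in the current ranking would cause.
    Assumes at least one positive label so iDCG is nonzero."""
    order = np.argsort(-scores)                  # current ranking by model score
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-indexed rank of each item
    gains = 2.0 ** labels - 1.0                   # standard exponential gain
    ideal = np.sort(gains)[::-1]
    idcg = np.sum(ideal / np.log2(np.arange(2, len(gains) + 2)))
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:
                # |ΔnDCG| if items i and j swapped rank positions
                delta = abs((gains[i] - gains[j])
                            * (1.0 / np.log2(ranks[i] + 1)
                               - 1.0 / np.log2(ranks[j] + 1))) / idcg
                # logistic pairwise loss, weighted by the nDCG impact
                loss += delta * np.log1p(np.exp(-(scores[i] - scores[j])))
    return loss
```

With this weighting, pairs whose misordering would hurt nDCG the most dominate the gradient, which is what makes the loss "ΔnDCG-based" rather than a plain pairwise loss.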
The work tackles two core issues: sparse training data in the coarse-ranking stage (the sample selection bias, or SSB, problem) and ranking inconsistency between modules. By combining end-to-end sequence learning with constraint-based optimization, the coarse-ranking model's scores align more closely with those of the fine-ranking model. The final solution couples pointwise loss constraints with sequence-aware weighting to preserve both ranking consistency and bid responsiveness.
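One plausible reading of "pointwise loss constraints with sequence-aware weighting" is a distillation objective: a pointwise term tying the coarse-stage (student) scores to the fine-stage (teacher) scores, plus a pairwise term, weighted by bid, that preserves the teacher's ordering. The function below is a hypothetical sketch under that reading; `distill_loss`, the averaging of bids per pair, and the `alpha` mixing weight are all my assumptions, not the paper's formulation:

```python
import numpy as np

def distill_loss(student, teacher, bids, alpha=0.5):
    """Hypothetical combined objective for coarse-to-fine alignment:
    - pointwise: MSE constraint pulling student scores toward teacher scores
    - pairwise: logistic loss on every pair the teacher orders, weighted
      by bid so high-bid items stay responsive to bid changes."""
    pointwise = np.mean((student - teacher) ** 2)
    pairwise = 0.0
    n = len(student)
    for i in range(n):
        for j in range(n):
            if teacher[i] > teacher[j]:
                # assumed per-pair weight: mean bid of the two items
                w = (bids[i] + bids[j]) / 2.0
                pairwise += w * np.log1p(np.exp(-(student[i] - student[j])))
    return alpha * pointwise + (1 - alpha) * pairwise
```

The pointwise term enforces score-level consistency with the fine-ranking model, while the bid-weighted pairwise term keeps the coarse ranker's ordering sensitive to bids, matching the two goals named above.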
Alimama Tech
Official Alimama tech channel, showcasing all of Alimama's technical innovations.