Model Innovation Forum: Advances in Recommendation Systems and Dense Retrieval
The Model Innovation Forum brings together experts from academia and industry to discuss cutting-edge recommendation-system models, covering efficient dense retrieval, Baidu's ranking architecture, offline reinforcement learning, and design lessons from large language models, and offers attendees deep technical insight alongside practical applications.
Recommendation systems have become indispensable in modern internet applications, helping users discover interesting content, improving satisfaction, and increasing platform value. Models are the core engine that maximizes the value of massive data and optimizes recommendation decisions, driving user experience and business growth.
The Model Innovation Forum invites experts from academia and industry to explore various model innovation patterns in recommendation systems and their real‑world applications, allowing the audience to learn about the latest research and practical solutions.
Key topics include efficient dense retrieval techniques, in‑depth discussions of Baidu's recommendation ranking models and architecture, offline reinforcement learning for recommendation, and insights from large language models.
The summit will be livestreamed; participants can scan the QR code in the poster to register and watch.
Speaker: Ji Houye – PhD intern at JD.com, responsible for video and live-stream recall; research focus on graph neural networks and recommendation systems, with 10+ papers in top conferences and journals.
Speaker: Li Haitao – Master's student at Tsinghua University, research interests in information retrieval and natural language processing, author of multiple SIGIR papers.
Talk Title: Constructing Tree‑based Index for Efficient and Effective Dense Retrieval
Outline: Dense Retrieval (DR) improves first-stage retrieval quality, but exhaustively scoring dense embeddings is expensive at serving time. Approximate Nearest Neighbor (ANN) indexes speed up inference, but because they are built separately from the encoder they often degrade retrieval performance. The proposed JTR jointly optimizes a tree-based index and the query encoder with a unified contrastive loss, tree-based negative sampling, and overlapped clustering, achieving better retrieval effectiveness while maintaining high efficiency.
Audience Takeaways: Conditions for a good tree index, joint optimization of tree index and encoder, and further optimization of non‑differentiable clustering.
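The core mechanic in the outline above, retrieving through a learned cluster tree instead of exhaustively scoring every document, can be shown with a minimal sketch: build a one-level k-means tree over document embeddings, then answer queries with beam search over cluster centroids. All function names here are hypothetical, and the talk's joint contrastive training and overlapped clustering are not reproduced.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    # Plain Lloyd's k-means, used to build one level of the cluster tree.
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (assign == c).any():
                centroids[c] = X[assign == c].mean(axis=0)
    return centroids, assign

def build_tree(doc_emb, branch=4):
    # One-level tree: root -> `branch` clusters -> document ids.
    centroids, assign = kmeans(doc_emb, branch)
    children = {c: np.where(assign == c)[0] for c in range(branch)}
    return centroids, children

def beam_retrieve(query, centroids, children, doc_emb, beam=2, topk=3):
    # Score cluster centroids first, descend only the top-`beam` clusters,
    # then rank documents inside them -- so most documents are never scored.
    cluster_scores = centroids @ query
    top_clusters = np.argsort(-cluster_scores)[:beam]
    cand = np.concatenate([children[c] for c in top_clusters])
    scores = doc_emb[cand] @ query
    return cand[np.argsort(-scores)[:topk]]
```

The efficiency gain comes from scoring only centroids plus the documents under the beamed branches; JTR's contribution is training the tree and the encoder together so that this pruning costs less accuracy than a separately built ANN index.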
Speaker: Zhu Shua – Senior R&D engineer at Baidu who leads coarse ranking for Feed stream recommendation, with extensive experience in large-scale DNN training frameworks and recommendation systems.
Talk Title: Reflections on Baidu Recommendation Ranking Technology
Outline: Three parts covering feature design, model algorithms, and system architecture for Baidu Feed recommendation ranking, sharing practical lessons, multi‑recall fairness, data drift mitigation, and sparse routing networks for complex multi‑objective models.
Audience Takeaways: Differences between coarse and fine ranking, ensuring fairness across recall branches, handling data drift, and deploying large multi‑objective models with sparse routing.
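The "sparse routing networks for complex multi-objective models" mentioned above can be pictured as a mixture-of-experts layer with a per-task top-k gate: each objective activates only a few experts, so inference cost scales with the number of activated experts rather than the total. This is a generic sketch under the usual sparse-MoE formulation, not Baidu's actual architecture; all names are invented.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class SparseMultiTaskRouter:
    # Toy multi-objective model: shared experts, one sparse gate per task.
    def __init__(self, dim, num_experts, num_tasks, topk=2, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = rng.standard_normal((num_experts, dim, dim)) * 0.1
        self.gates = rng.standard_normal((num_tasks, dim, num_experts)) * 0.1
        self.topk = topk

    def forward(self, x, task):
        logits = x @ self.gates[task]            # one logit per expert
        keep = np.argsort(-logits)[: self.topk]  # route to top-k experts only
        w = softmax(logits[keep])
        # Only the selected experts are evaluated; the rest cost nothing.
        return sum(wi * (x @ self.experts[e]) for wi, e in zip(w, keep))
```

With `topk=2` and, say, 16 experts, each task pays for two expert forward passes per request while the full model still has the capacity of all 16.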
Speaker: Gao Chongming – Algorithm intern at Kuaishou, PhD candidate at USTC, researching offline reinforcement learning for recommendation systems.
Talk Title: Offline Reinforcement Learning for User Satisfaction Optimization and Evaluation
Outline: Recommendation framed as a sequential decision problem, which RL naturally addresses; the challenges of offline RL, where no real-time interaction with users is available; solutions for information cocoons (filter bubbles) and the Matthew effect; and methods for developing and evaluating offline RL strategies.
Audience Takeaways: Building offline RL‑based recommendation policies, proper evaluation of offline RL algorithms, and differences between offline and online metric optimization.
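One standard way to evaluate a policy from logged data without online interaction, in the spirit of the evaluation takeaway above, is inverse propensity scoring (IPS): reweight each logged reward by the ratio of the target policy's probability to the logging policy's probability. A minimal bandit-style sketch, assuming logged tuples of (action, reward, logging-policy probability); the talk's actual methods are not specified here.

```python
def ips_estimate(logged, target_probs):
    # Off-policy value estimate of a target policy from logged interactions.
    # logged: iterable of (action, reward, logging_prob) tuples.
    # target_probs: mapping action -> probability under the target policy.
    total = 0.0
    for action, reward, logging_prob in logged:
        # Importance weight corrects for the mismatch between the policy
        # that collected the data and the policy being evaluated.
        total += (target_probs[action] / logging_prob) * reward
    return total / len(logged)
```

The estimator is unbiased when the logging policy gives every target-policy action nonzero probability, but its variance grows with the importance weights, which is why variants such as clipped or self-normalized IPS are common in practice.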
Speaker: Zhang Pengtao – Technical expert at Sina Weibo, PhD in computer applications, focusing on personalized recommendation, large language models, and user profiling.
Talk Title: Can We Obtain Large Models for Recommendation Systems?
Outline: Leveraging the strong memory capabilities of NLP large models to design independent memory mechanisms for recommendation models, covering NLP large model inspirations, HCNet & MemoNet architectures, and experimental results.
Audience Takeaways: How NLP large models inspire recommendation model design, enhancing memory in recommendation models, and the impact of memory enhancements on performance.
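A generic way to picture the "independent memory mechanism" described above is a hashed key-value table that memorizes information per feature cross, so the model need not allocate a parameter for every distinct cross. The sketch below is illustrative only; it is not the actual HCNet or MemoNet design, and all names are invented.

```python
import numpy as np

class CrossFeatureMemory:
    # Toy codebook memory: each (feature_a, feature_b) cross hashes into a
    # fixed table of slots, giving bounded-size, learnable per-cross storage.
    def __init__(self, num_slots=1024, dim=8, seed=0):
        rng = np.random.default_rng(seed)
        self.table = rng.standard_normal((num_slots, dim)) * 0.01
        self.num_slots = num_slots

    def _slot(self, feat_a, feat_b):
        # Python's string hash is randomized per process but stable within
        # one run, which is enough for this illustration.
        return hash((feat_a, feat_b)) % self.num_slots

    def read(self, feat_a, feat_b):
        # Lookup feeds the cross embedding into the rest of the model.
        return self.table[self._slot(feat_a, feat_b)]

    def write(self, feat_a, feat_b, grad, lr=0.1):
        # SGD-style update touching only the slot this cross hashes to.
        self.table[self._slot(feat_a, feat_b)] -= lr * grad
```

Hash collisions mean unrelated crosses can share a slot; trading collision noise against table size is the central design question for memory layers of this kind.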
DataFunTalk
Dedicated to sharing and discussing applications of big data and AI technology, with the aim of empowering a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation and search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.