Large Models and Recommendation Systems: Challenges, Opportunities, and Future Directions
At CNCC 2023, leading researchers and industry experts convened to examine how large language models can transform recommendation systems. They outlined four core challenges (model integration, fluency versus intelligence, hallucination versus deception, and user understanding) while highlighting opportunities such as multimodal content understanding, cold-start solutions, zero-shot ranking, instruction-driven algorithms, and responsible, interactive recommendation pipelines.
This article reports on a technical forum held during CNCC 2023 in Shenyang, focusing on the intersection of large language models (LLMs) and recommendation systems. The forum featured presentations from leading experts including Professor Zhang Min of Tsinghua University, Professor He Xiangnan of the University of Science and Technology of China, and others.
Professor Zhang Min discussed the challenges and opportunities of recommendation systems in the large model era, identifying four key challenges: how to combine large models with recommendation systems, distinguishing between fluency and intelligence, addressing hallucination versus deception issues, and understanding users. She also highlighted opportunities including new application scenarios, user interaction methods, and responsible recommendation systems.
Feng Di, Vice President at Xiaohongshu, presented the company's innovative explorations in recommendation systems, covering multimodal content understanding, cold-start problems, multi-objective modeling, and the combination of recommendation systems with large models.
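The cold-start idea can be illustrated with a minimal sketch: a brand-new item has no interaction history, so it is scored purely by similarity between its content embedding (e.g. from a multimodal encoder) and the average embedding of items the user already engaged with. All vectors below are toy values for illustration, not Xiaohongshu's actual method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def user_profile(engaged_vectors):
    """Average the content embeddings of items the user interacted with."""
    dim = len(engaged_vectors[0])
    return [sum(v[i] for v in engaged_vectors) / len(engaged_vectors)
            for i in range(dim)]

def score_cold_item(profile, item_vector):
    """Score a zero-interaction item by content similarity alone."""
    return cosine(profile, item_vector)

profile = user_profile([[1.0, 0.0], [0.8, 0.2]])
# An item whose content resembles past engagements outscores an unrelated one.
assert score_cold_item(profile, [1.0, 0.1]) > score_cold_item(profile, [0.0, 1.0])
```

In production systems the content encoder, not the similarity function, does the heavy lifting; this sketch only shows why content representations let new items be ranked before any feedback exists.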
Professor He Xiangnan from the University of Science and Technology of China discussed why large models should be used in recommendation systems, focusing on representation, generalization, and generation capabilities. He presented solutions for aligning recommendation tasks and information modalities with large models.
Professor Zhao Xin from Renmin University introduced the latest applications of large language models in recommendation systems, covering zero-shot ranking, instruction-based recommendation algorithms, and large language agents as collaborative filtering learners.
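Zero-shot ranking can be sketched as follows, assuming a generic text-completion function (hypothetical; swap in any real LLM client). The model sees the user's interaction history and a numbered candidate list, and returns an ordering with no recommendation-specific training; the reply string below is a stand-in for an actual model response.

```python
def build_ranking_prompt(history, candidates):
    """Format interaction history and candidates into a ranking instruction."""
    lines = ["I interacted with these items, in order:"]
    lines += [f"- {title}" for title in history]
    lines.append("Rank the following candidates by how likely I am to enjoy them,")
    lines.append("answering only with the numbers, most likely first:")
    lines += [f"{i}. {title}" for i, title in enumerate(candidates, start=1)]
    return "\n".join(lines)

def parse_ranking(reply, candidates):
    """Map a numbered answer such as '2, 3, 1' back to candidate titles."""
    order = [int(tok) for tok in reply.replace(",", " ").split() if tok.isdigit()]
    valid = [i for i in order if 1 <= i <= len(candidates)]
    return [candidates[i - 1] for i in valid]

history = ["The Martian", "Interstellar"]
candidates = ["Gravity", "Notting Hill", "Apollo 13"]
prompt = build_ranking_prompt(history, candidates)
# "3, 1, 2" stands in for the LLM's reply to `prompt`.
ranked = parse_ranking("3, 1, 2", candidates)
# ranked == ["Apollo 13", "Gravity", "Notting Hill"]
```

The parsing step matters in practice: LLM replies are free-form text, so robust extraction and validation of the returned indices is part of any zero-shot ranking setup.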
Dr. Tang Ruiming from Huawei Noah's Ark Lab discussed how recommendation systems can benefit from large language models, covering where and how to apply them in the recommendation pipeline, and future directions including addressing sparse scenarios and improving interaction experiences.
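One integration point in such pipelines is using an LLM offline to enrich item features, which then feed a lightweight conventional ranker online. The sketch below uses a hypothetical `summarize` stub in place of an LLM call that extracts tags from an item description, and ranks items by tag overlap with the user's interests; it illustrates the "where" (offline feature enrichment) rather than any specific production design.

```python
def summarize(description):
    """Hypothetical stand-in for an offline LLM call that extracts item tags."""
    stopwords = {"a", "the", "with", "and", "of", "for"}
    return {w.strip(".,").lower() for w in description.split()
            if w.lower() not in stopwords}

def rank(user_interests, items):
    """Order (name, description) items by overlap with the user's interest tags."""
    scored = [(len(summarize(desc) & user_interests), name) for name, desc in items]
    return [name for _, name in sorted(scored, key=lambda t: (-t[0], t[1]))]

items = [("cam1", "a camera for travel photography"),
         ("pan1", "a pan for home cooking")]
print(rank({"travel", "photography"}, items))  # prints ['cam1', 'pan1']
```

Running the expensive LLM step offline keeps serving latency unchanged, which is one reason feature enrichment is often the first place LLMs enter an existing recommendation pipeline.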
The article also includes a roundtable discussion where experts explored the unique advantages of large models in recommendation systems, challenges faced, and skills needed for practitioners and students in this field.
Xiaohongshu Tech REDtech
The official account of the Xiaohongshu tech team, sharing technical innovations and problem-solving insights.