Bridging Compute and Applications: 58.com AI Lab’s Large‑Model Platform and AI Agent Solutions
In this article, 58.com AI Lab senior director Zhan Kunlin explains how the company built a multi‑layer AI platform, created a vertical large‑language model called LingXi, and developed an AI Agent system with RAG capabilities to accelerate practical AI applications across various business scenarios.
At the recent WOT Global Technology Innovation Conference, Zhan Kunlin, senior director of 58.com and head of the AI Lab, shared the company’s approach to turning AI research into usable tools by leveraging large models and AI agents.
The AI Lab first embraced large language models (LLMs) in 2023, developing a proprietary vertical LLM named LingXi, which has passed the generative AI service registration (Beijing—LingXi—202407050027). This model serves as the foundation for upgrading the traditional intelligent dialogue platform into a full AI Agent platform.
58.com’s business connects businesses and consumers (B2C) across housing, recruitment, automotive, and local services. To close the gap between raw compute and AI applications, the AI platform is organized into three layers: a bottom-layer AI compute engine that manages CPU/GPU resources and offers “model‑as‑a‑service”; a middle layer exposing multimodal and LLM capabilities that developers can call and fine‑tune; and an application layer providing solutions such as intelligent dialogue, digital humans, and agents for rapid deployment across sales, customer service, product, and internal workflows.
The platform integrates open‑source LLMs with LoRA/QLoRA fine‑tuning and MoE training strategies, enabling end‑to‑end model fine‑tuning and one‑click deployment via inference frameworks such as vLLM. Continuous optimization includes MoE architectures to reduce the number of parameters active per inference, and S‑LoRA to serve many fine‑tuned adapters from a shared base model on limited GPU resources.
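The article does not show 58.com's training code, but the low-rank idea behind LoRA fine-tuning can be sketched in a few lines of NumPy: the frozen base weight matrix W is adapted by adding a product of two small matrices, so only the factors A and B are trained. All names and dimensions below are illustrative.

```python
import numpy as np

def lora_merge(W, A, B, alpha, r):
    """Merge a low-rank LoRA update into a frozen base weight matrix.

    W: (d_out, d_in) frozen base weights
    A: (r, d_in), B: (d_out, r) trainable low-rank factors
    Effective weight: W + (alpha / r) * B @ A.
    """
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in))
B = np.zeros((d_out, r))  # B starts at zero, so training begins from the base model

W_eff = lora_merge(W, A, B, alpha=16, r=r)
assert np.allclose(W_eff, W)  # zero-initialized B leaves the base weights unchanged

# After training, B is nonzero, and the weight update has rank at most r --
# which is why the adapter is tiny compared with the full weight matrix.
B_trained = rng.standard_normal((d_out, r))
delta = lora_merge(W, A, B_trained, alpha=16, r=r) - W
```

Because each adapter is only the small (A, B) pair, approaches like S‑LoRA can keep one base model in GPU memory and swap many adapters in and out cheaply.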
To address domain‑specific needs, the AI Lab built the LingXi vertical model by incrementally pre‑training on 58.com’s proprietary data combined with public datasets, followed by careful fine‑tuning and alignment to avoid catastrophic forgetting. Evaluations on public benchmarks and internal scenarios show LingXi outperforms comparable open‑source models, and its safety model excels in content moderation tasks versus GPT‑4.
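One common guard against catastrophic forgetting during continued pre-training is to keep a large share of general public data mixed into every batch alongside the proprietary data. The sampler below is a minimal sketch of that idea; the 0.3 domain ratio and function names are illustrative assumptions, not 58.com's actual recipe.

```python
import random

def mixed_batch(domain_docs, public_docs, domain_ratio=0.3, n=1000, seed=42):
    """Interleave proprietary-domain and general public documents.

    Capping the domain share (here an illustrative 30%) keeps the model
    anchored to its general-purpose abilities while it absorbs new
    domain knowledge during incremental pre-training.
    """
    rng = random.Random(seed)
    batch = []
    for _ in range(n):
        source = domain_docs if rng.random() < domain_ratio else public_docs
        batch.append(rng.choice(source))
    return batch

sample = mixed_batch(["domain"], ["public"], domain_ratio=0.3, n=10000)
share = sample.count("domain") / len(sample)
```

In practice the mixing ratio is a tuning knob: too much domain data erodes general capability, too little fails to teach the domain.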
An AI Agent platform was created on top of LingXi, featuring role‑play capabilities (e.g., HR or sales assistants), tool‑calling, and a custom RAG module that lets users upload documents to build knowledge bases. Unlike many agent platforms that only generate UI pages, this solution provides APIs for seamless integration into custom applications.
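The core RAG loop the article describes (upload documents, retrieve relevant passages, ground the model's answer in them) can be sketched as follows. For self-containment this toy version scores by word overlap instead of the embedding similarity a production module would use; the knowledge-base contents and function names are invented for illustration.

```python
from collections import Counter

def retrieve(query, docs, k=2):
    """Rank documents by word overlap with the query -- a stand-in
    for the vector-embedding similarity search used in real RAG."""
    q = Counter(query.lower().split())
    scored = sorted(docs, key=lambda d: -sum(q[w] for w in d.lower().split()))
    return scored[:k]

def build_prompt(query, docs):
    """Assemble the retrieved passages into a grounded prompt for the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy knowledge base standing in for user-uploaded documents
kb = [
    "Listings in Beijing require ID verification.",
    "Refunds are processed within 7 business days.",
    "Premium listings appear at the top of search results.",
]
prompt = build_prompt("How long do refunds take?", kb)
```

The prompt, rather than a rendered UI page, is what an API-first agent platform returns to the caller, which is what lets the module slot into custom applications.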
Practical deployments include a sales‑training chatbot that simulates role‑play for new agents, and a SQL assistant that generates and corrects queries, dramatically speeding up data‑driven decision making. To date, over 50 AI‑powered applications have been launched across sales, customer service, user experience, and internal operations.
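A generate-and-correct SQL assistant typically validates each candidate query and feeds any error back to the model for another attempt. The sketch below dry-runs a query against the schema with SQLite's `EXPLAIN` (which compiles without executing); the schema, the `llm_fix` stub, and the loop bound are all illustrative assumptions, with the stub standing in for a real LLM call.

```python
import sqlite3

def validate_sql(sql, schema_sql):
    """Check a generated query against the schema without touching real
    data, using SQLite's EXPLAIN as a dry-run compiler."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema_sql)
    try:
        conn.execute(f"EXPLAIN {sql}")
        return None                  # query compiles cleanly
    except sqlite3.Error as e:
        return str(e)                # error text to feed back to the model
    finally:
        conn.close()

SCHEMA = "CREATE TABLE orders (id INTEGER, city TEXT, amount REAL);"

def llm_fix(sql, error):
    """Hypothetical stand-in for the LLM correction call."""
    return sql.replace("amnt", "amount")  # toy fix for a column typo

# Correction loop: retry until the query compiles or attempts run out.
sql = "SELECT city, SUM(amnt) FROM orders GROUP BY city"
for _ in range(3):
    err = validate_sql(sql, SCHEMA)
    if err is None:
        break
    sql = llm_fix(sql, err)
```

Catching schema errors before execution is what makes such an assistant safe enough to speed up day-to-day data pulls.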
Zhan concludes with three takeaways: vertical models built on open‑source foundations deliver strong performance; model size is not the sole determinant of success—fit‑for‑purpose matters more; and the ultimate goal of platform, model, and agent development is to incubate high‑impact applications that transform business.
58 Tech
Official tech channel of 58, a platform for tech innovation, sharing, and communication.