How Tencent Angel’s AI Platform Won the 2023 CIE Science & Tech Award
Tencent’s Angel machine‑learning platform, winner of a 2023 China Institute of Electronics Science & Technology Award, combines breakthroughs in distributed training, high‑efficiency caching, adaptive sampling, multimodal fusion, and graph‑model structure search to dramatically improve the performance and cost of large‑scale AI models.
The China Institute of Electronics announced that "Key Technologies and Applications of Angel Machine‑Learning Platform for Massive Data," a joint project led by Tencent in partnership with Peking University and Beijing University of Science and Technology, won the First Prize for Technological Progress in the 2023 Science & Technology Awards.
Why the Angel Platform? Four Core Technology Breakthroughs
Angel’s distributed parameter‑server architecture decouples model‑parameter storage from computation, so ever‑larger models can be supported simply by adding parameter servers. The platform achieves industry‑leading performance in all‑to‑all communication caching, adaptive pre‑sampling, and graph‑structure search.
For training ultra‑large models on massive data, Angel cuts communication overhead by 80% and trains 2.5× faster than mainstream solutions, thanks to efficient network and cache scheduling on Tencent Cloud’s Xingmai network.
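To make the parameter‑server idea concrete, here is a minimal single‑process sketch (not Angel’s implementation; all names are hypothetical): a server holds the parameters, and workers pull them, compute gradients on their local data shards, and push updates back. Adding more servers in a real system shards the parameters across machines.

```python
import numpy as np

class ParameterServer:
    """Toy parameter server: stores model parameters, applies worker gradients."""
    def __init__(self, dim, lr=0.1):
        self.params = np.zeros(dim)
        self.lr = lr

    def pull(self):
        # Workers fetch the current parameters before computing gradients.
        return self.params.copy()

    def push(self, grad):
        # Workers send gradients; the server applies an SGD update.
        self.params -= self.lr * grad

def worker_step(server, X, y):
    """One worker iteration on a local data shard (least-squares gradient)."""
    w = server.pull()
    grad = X.T @ (X @ w - y) / len(y)
    server.push(grad)

# Two "workers" alternately fitting w in y = 2x, each on its own shard.
X = np.arange(1, 5, dtype=float).reshape(-1, 1)
y = 2.0 * X.ravel()
server = ParameterServer(dim=1)
for _ in range(100):
    worker_step(server, X[:2], y[:2])
    worker_step(server, X[2:], y[2:])
print(round(server.params[0], 2))  # → 2.0
```

Because only gradients and parameters cross the network (never the training data), this layout scales model size with the number of servers and data size with the number of workers.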
Its unified memory‑and‑disk storage mechanism doubles both model capacity and training performance relative to industry standards, overcoming GPU memory limits for TB‑scale models.
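The core idea behind such tiered storage can be sketched as a simple two‑level store (a toy illustration, not Angel’s mechanism; all names are hypothetical): hot parameter blocks stay in memory, cold ones spill to disk and are reloaded transparently on access.

```python
import os
import tempfile
from collections import OrderedDict
import numpy as np

class TieredParamStore:
    """Toy tiered store: hot parameter blocks stay in memory,
    cold blocks spill to disk with LRU eviction."""
    def __init__(self, max_in_memory):
        self.max_in_memory = max_in_memory
        self.mem = OrderedDict()            # block_id -> array (hot tier)
        self.disk_dir = tempfile.mkdtemp()  # cold tier

    def _disk_path(self, block_id):
        return os.path.join(self.disk_dir, f"{block_id}.npy")

    def put(self, block_id, array):
        self.mem[block_id] = array
        self.mem.move_to_end(block_id)
        while len(self.mem) > self.max_in_memory:
            cold_id, cold = self.mem.popitem(last=False)  # evict LRU block
            np.save(self._disk_path(cold_id), cold)

    def get(self, block_id):
        if block_id not in self.mem:
            # Promote the block from disk back into the hot tier.
            self.put(block_id, np.load(self._disk_path(block_id)))
        self.mem.move_to_end(block_id)
        return self.mem[block_id]

store = TieredParamStore(max_in_memory=2)
for i in range(4):                  # 4 blocks, only 2 fit in memory
    store.put(i, np.full(3, float(i)))
print(sorted(store.mem))            # → [2, 3]  (the 2 most recent stay hot)
print(store.get(0)[0])              # → 0.0  (block 0 transparently reloads)
```

A real system would layer GPU memory, host memory, and SSD the same way; the point is that total model capacity is bounded by the slowest, largest tier rather than by GPU memory alone.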
In multimodal scenarios, Angel’s end‑to‑end ranking and recommendation technology boosts ad recall by over 40%, while its adaptive graph‑network structure search improves graph‑model training performance by 28×, addressing the difficulty of mining massive graph data.
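Structure search of this kind means choosing a graph model’s architecture automatically rather than by hand. The simplest possible stand‑in is an exhaustive sweep over a small search space with a scoring function (a deliberately tiny illustration; Angel’s adaptive search is far more sophisticated, and all names here are hypothetical):

```python
from itertools import product

# Toy search space of graph-model structures.
search_space = {
    "num_layers": [1, 2, 3],
    "hidden_dim": [32, 64, 128],
    "aggregator": ["mean", "max", "sum"],
}

def validate(config):
    """Stand-in for training and scoring a graph model with this structure;
    a fixed preferred configuration keeps the example deterministic."""
    target = {"num_layers": 2, "hidden_dim": 64, "aggregator": "mean"}
    return sum(config[k] == v for k, v in target.items())

best_config, best_score = None, -1
keys = list(search_space)
for values in product(*search_space.values()):
    config = dict(zip(keys, values))
    score = validate(config)
    if score > best_score:
        best_config, best_score = config, score

print(best_config, best_score)
```

An adaptive search replaces the exhaustive loop with a strategy that uses earlier scores to decide which structures to try next, which is what makes searching realistic spaces (far too large to enumerate) tractable.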
Supporting Tencent Hunyuan Large Model
Since its launch in 2015, Angel has supported PS‑Worker distributed training and LDA topic models with billions of parameters. Open‑sourced in 2017, it solved heterogeneous‑network communication problems and later achieved breakthroughs in scalable graph models and unified GPU‑memory storage.
For the Hunyuan large model, Angel introduced the Angel PTM training and Angel HCF inference frameworks, enabling a single task to train on tens of thousands of GPUs with 2.6× the efficiency of open‑source frameworks and 50% cost savings. Inference is 1.3× faster, cutting text‑to‑image latency from 10 seconds to 3–4 seconds.
Angel also provides a one‑stop platform from model development to deployment via APIs, accelerating AI applications across more than 400 Tencent products, including Tencent Meeting, Tencent News, and Tencent Video.
Through these innovations, Angel powers Tencent’s advertising, conferencing, and other services, and supports external industry customers via Tencent Cloud, driving digital transformation and intelligent automation.
Tencent Tech
Tencent's official tech account. Delivering quality technical content to serve developers.