2024 Generative Large Model Security Assessment White Paper Released at CCF China Data Conference
The 2024 Generative Large Model Security Assessment White Paper, jointly authored by the Chinese Academy of Sciences, the Ministry of Public Security's Third Research Institute, and Ant Group's Ant Security Lab, was unveiled at the inaugural CCF China Data Conference. It offers a comprehensive review of model risks, ethical concerns, and evaluation methods to guide research, industry practice, and policy making.
At the inaugural CCF China Data Conference, the "Generative Large Model Security Assessment White Paper (2024)" was officially released. It was co-authored by the Intelligent Algorithm Security Key Laboratory of the Chinese Academy of Sciences, the Ministry of Public Security's Third Research Institute, and Ant Group's Ant Security Lab.
The white paper systematically reviews the development status and security risks of nearly 20 generative large models, including GPT, LLaMA, Moss, and Wenxin Yiyan, and analyzes key challenges and mitigation strategies through practical case studies.
The white paper categorizes security risks into three major types (ethical risk, technical security risk, and content security risk), covers four evaluation dimensions (ethics, privacy, factuality, and robustness), and outlines two assessment methods: metric-based measurement and model attack testing. Its stated aim is to support academic research, industrial practice, and policy formulation.
Special attention is given to Ant Group's "Zhi Xiao Bao" triple-layer security framework, powered by Ant's self-developed integrated solution comprising the model security testing platform "Ant-Jian" and the risk defense platform "Tian-Jian". Together they target AI evaluation and security defense to ensure safe, controllable, and reliable deployment of large models.
Since 2022, generative large models such as ChatGPT have attracted global attention, reshaping the AI landscape and driving China’s digital economy and intelligent transformation. However, emerging risks—hallucinations, confidential data leakage, privacy breaches, malicious misuse, technical vulnerabilities, and compliance issues—pose significant challenges.
The Chinese government has responded with policies like the "Interim Measures for the Administration of Generative AI Services," establishing principles and regulatory requirements for safety, risk control, and compliance.
The release ceremony featured senior representatives from the Chinese Academy of Sciences, the Ministry of Public Security, Ant Group, and Zhejiang University, who emphasized that the white paper aims to bolster safe, reliable, and controllable AI ecosystems and promote healthy development of generative AI technologies.
AntTech