Ant Group’s Morse & ARCLab Wins Both Attack and Defense Tracks in NeurIPS 2024 LLM Privacy Challenge
Ant Group’s Morse & ARCLab team won the attack track and received the best practical defense award in the LLM Privacy Challenge at NeurIPS 2024, demonstrating state-of-the-art methods both for extracting training data from large language models and for protecting model privacy through data sanitization and differential privacy.
The official competition of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024) – the Large Language Model Privacy Challenge (LLM-PC) – recently concluded. A joint team of Ant Group’s Morse and Zhejiang University’s Computer Architecture Lab, competing as “Morse & ARCLab,” won the championship of the attack track and the best practical defense award in the defense track.
NeurIPS is one of the three flagship conferences in machine learning and a Class-A recommended conference of the China Computer Federation (CCF). The winning solutions represent state-of-the-art industry techniques.
With the rapid adoption of large models across industries, data security and privacy concerns have become increasingly prominent. Throughout training, fine‑tuning, and inference, personal and enterprise‑critical data may be leaked, making privacy protection a critical topic.
The challenge required participants to design innovative solutions either to steal private training data from downstream models (attack track) or to devise privacy‑preserving training methods for large models (defense track). The competition focused on the privacy security of LLM training data, aiming to advance the field toward safer, more reliable AI systems.
Attack Track: The task was to extract private training data from a given large model. The Ant–Zhejiang team constructed prompts, queried the target model for candidate responses, and selected the candidate to which the model assigned the lowest loss. Their method achieved a 23.3% attack success rate against the provided Llama-3.1-8B model.
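The core of this selection step can be sketched as follows. This is a minimal illustration, not the team's actual pipeline (which is not described in detail here): `target_loss` stands in for the per-candidate loss a real target model would report, and `toy_loss` is a hypothetical scoring function used only so the sketch runs end to end.

```python
def select_candidate(candidates, target_loss):
    """Return the candidate the target model finds most 'familiar' --
    the one with the lowest loss, a common signal that a string was
    memorized from training data."""
    return min(candidates, key=target_loss)

# Hypothetical stand-in for a real model's loss function: in practice this
# would be the target LLM's loss (or perplexity) on each candidate string.
def toy_loss(text):
    return len(text) / 10.0

candidates = ["alice@example.com", "a much longer and less likely string"]
best = select_candidate(candidates, toy_loss)
```

In a real attack, the loss would come from scoring each candidate continuation with the target model itself; the lowest-loss candidate is the most likely to be verbatim training data.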
Defense Track: The goal was to design a training method that protects data privacy in large models. The team’s solution combined data sanitization and data synthesis to disrupt the model’s memorization of training data, reducing the success rate of the organizer’s attack by 30.6% while preserving performance on benchmarks such as MMLU and TruthfulQA.
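One piece of such a pipeline, a sanitization pass that scrubs personally identifiable information from training text before fine-tuning, can be sketched as below. This is an assumption-laden illustration: the patterns shown are simple examples, and the team's full solution also involves data synthesis and differential privacy, neither of which is shown here.

```python
import re

# Example PII patterns (illustrative only; production systems use far more
# robust detectors, often NER models rather than regexes).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3,4}[-.\s]?\d{4}\b"),
}

def sanitize(text):
    """Replace detected PII spans with type placeholders so the model
    never sees (and so cannot memorize) the raw values."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact Bob at bob@bank.com or 555-123-4567."
clean = sanitize(record)
```

Replacing values with typed placeholders (rather than deleting them) keeps the sentence structure intact, which helps preserve downstream benchmark performance while removing the memorizable secrets.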
The underlying technology integrates data sanitization, differential privacy, model obfuscation, and Trusted Execution Environments (TEEs), and has already been applied in agricultural-finance scenarios, helping a bank identify loan-eligible farmers and improving loan-processing efficiency and customer satisfaction.
In July, Ant Group’s Morse became one of the first vendors to pass the Trusted Execution Environment product evaluation for large models conducted by the China Academy of Information and Communications Technology (CAICT), with deployments in banking and securities. The company will continue investing in LLM privacy protection to promote responsible AI adoption across industries.
AntTech
Technology is the core driver of Ant Group's future.