Federated Learning and Data Security in the Era of Large Models: Research Overview and the FLAIR Platform
This presentation reviews recent research on data security and utilization in the large‑model era. It covers privacy‑preserving federated learning, knowledge‑transfer techniques, prototype‑based modeling, and multi‑model fusion methods such as FuseGen, and introduces FLAIR, a federated knowledge computing platform for both horizontal and vertical federated scenarios.
The talk, delivered by a Tsinghua University PhD student, introduces the challenges of data security and utilization as large models become mainstream, emphasizing the need to protect private data while still leveraging its value.
It outlines three main parts: (1) data security and usage issues in the large‑model era, (2) privacy‑preserving computation and federated knowledge transfer technologies, and (3) the federated knowledge computing platform FLAIR.
Key challenges discussed include the high cost of collecting high‑quality training data, the risk of data leakage when sending data to remote large models, and the tension between model privacy and the resource demands of large‑scale models.
The speaker reviews privacy‑preserving techniques such as homomorphic encryption, secure multi‑party computation, and differential privacy, and then details federated learning paradigms (cross‑device, cross‑silo, horizontal, vertical, and federated transfer learning) with a focus on knowledge‑transfer approaches like logit sharing, representation sharing, and prototype aggregation.
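Of the knowledge‑transfer approaches listed, prototype aggregation is the simplest to illustrate: each client computes per‑class mean embeddings (prototypes) locally and shares only those, and the server averages them weighted by sample counts. The sketch below is a minimal illustration of this general idea, not the exact formulation from any specific paper; the function names and the count‑weighted averaging scheme are my assumptions.

```python
import numpy as np

def client_prototypes(features, labels, num_classes):
    """Compute per-class mean embeddings (prototypes) on one client.

    Only these class-level averages leave the client, not raw samples.
    """
    protos, counts = {}, {}
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(axis=0)
            counts[c] = int(mask.sum())
    return protos, counts

def aggregate_prototypes(client_results, num_classes, dim):
    """Server-side: average client prototypes per class, weighted by counts."""
    global_protos = {}
    for c in range(num_classes):
        acc, total = np.zeros(dim), 0
        for protos, counts in client_results:
            if c in protos:
                acc += protos[c] * counts[c]
                total += counts[c]
        if total > 0:
            global_protos[c] = acc / total
    return global_protos
```

Clients can then regularize local training by pulling each sample's embedding toward its global class prototype, which is how such methods improve class separation.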
Recent research contributions are highlighted: FedHKT (INFOCOM 2023) uses logit‑based knowledge distillation for federated learning; CreamFL (ICLR 2023) enables cross‑modal federated learning via representation exchange; prototype‑based methods improve class separation; FuseGen fuses multiple large‑model outputs to generate high‑quality synthetic data without relying on a single model.
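Logit‑based federated distillation of the kind FedHKT builds on typically has clients score a shared public set and exchange logits instead of model weights. Below is a minimal, generic sketch of that exchange (plain averaging plus temperature‑scaled soft labels); it is an assumed simplification, not FedHKT's actual aggregation rule.

```python
import numpy as np

def softmax(z, temperature=1.0):
    """Numerically stable softmax with distillation temperature."""
    z = z / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def aggregate_logits(client_logits, temperature=2.0):
    """Server-side: average clients' logits on a shared public set
    into soft labels that each client can then distill from.

    client_logits: array of shape (num_clients, num_samples, num_classes).
    """
    mean_logits = np.mean(client_logits, axis=0)
    return softmax(mean_logits, temperature)
```

Each client would then minimize a KL‑divergence loss between its own softened predictions and these aggregated soft labels, so knowledge transfers without sharing parameters or raw data.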
The FLAIR platform, evolved from VFLAIR, supports vertical federated learning with model splitting, horizontal federated learning with large language models (e.g., ChatGLM‑6B, Bloom‑7B), and cross‑modal tasks, offering extensive attack/defense benchmarks and a risk‑assessment metric.
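The model‑splitting setup used in vertical federated learning can be sketched as follows: parties holding disjoint feature columns of the same aligned samples each run a local "bottom" model, and only the resulting embeddings cross the party boundary to the active party's "top" model. This is a generic forward‑pass illustration under assumed dimensions, not FLAIR's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class BottomModel:
    """Party-local encoder mapping a party's feature slice to an embedding."""
    def __init__(self, in_dim, out_dim):
        self.W = rng.normal(scale=0.1, size=(in_dim, out_dim))
    def forward(self, x):
        return np.tanh(x @ self.W)

class TopModel:
    """Active party's head operating on the concatenated embeddings."""
    def __init__(self, in_dim, num_classes):
        self.W = rng.normal(scale=0.1, size=(in_dim, num_classes))
    def forward(self, h):
        return h @ self.W

# Two parties hold disjoint feature columns of the same 4 aligned samples.
x_a = rng.normal(size=(4, 5))  # party A: 5 features per sample
x_b = rng.normal(size=(4, 3))  # party B: 3 features per sample
bottom_a, bottom_b = BottomModel(5, 8), BottomModel(3, 8)
top = TopModel(16, 2)

# Only embeddings cross the party boundary, never raw features.
h = np.concatenate([bottom_a.forward(x_a), bottom_b.forward(x_b)], axis=1)
logits = top.forward(h)
```

In training, gradients flow back through the same split: the top model returns gradients with respect to each party's embedding, which is exactly the interface that FLAIR's attack/defense benchmarks probe for leakage.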
FLAIR has been deployed in real medical institutions (e.g., Chang‑Gung Hospital) and provides open‑source code for the research community.
DataFunTalk
Dedicated to sharing and discussing big data and AI technology applications, aiming to empower a million data scientists. Regularly hosts live tech talks and curates articles on big data, recommendation/search algorithms, advertising algorithms, NLP, intelligent risk control, autonomous driving, and machine learning/deep learning.