Tag

adversarial training

1 view collected around this technical thread.

Tencent Cloud Developer
Mar 25, 2025 · Artificial Intelligence

Knowledge Distillation in Diffusion Models: Techniques and Applications

The article explains how knowledge distillation transfers capabilities from large to smaller diffusion models, covering hard and soft labels, temperature scaling, and contrasting it with data distillation, while detailing techniques such as consistency models, progressive distillation, adversarial distillation, and adversarial post‑training for model compression and step reduction.

adversarial post-training · adversarial training · consistency models
0 likes · 19 min read
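The soft labels and temperature scaling that the distillation article covers can be sketched in a few lines. The logits, the temperature T, and the mixing weight alpha below are illustrative assumptions, not code from the article:

```python
import math

def softmax_with_temperature(logits, T=1.0):
    # T > 1 softens the distribution, exposing the teacher's "dark knowledge"
    # (relative probabilities of the wrong classes).
    scaled = [z / T for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    # Soft-label term: cross-entropy between teacher and student soft targets,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures.
    p_t = softmax_with_temperature(teacher_logits, T)
    p_s = softmax_with_temperature(student_logits, T)
    soft = -sum(pt * math.log(ps) for pt, ps in zip(p_t, p_s)) * T * T
    # Hard-label term: ordinary cross-entropy with the ground-truth class.
    q = softmax_with_temperature(student_logits, 1.0)
    hard = -math.log(q[hard_label])
    return alpha * soft + (1 - alpha) * hard
```

Raising T flattens both distributions, so the student is pushed to match how the teacher ranks all classes rather than just the argmax.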
AntTech
Jan 6, 2025 · Artificial Intelligence

2024 Security and Trusted AI Research Highlights from Alibaba, Tsinghua, Zhejiang, and Partner Institutions

This article presents sixteen peer‑reviewed research papers published in top conferences and journals in 2024, covering trusted AI, large‑model applications, network security, adversarial training, deep‑fake detection, secure inference, and related topics from collaborations among Alibaba, Tsinghua, Zhejiang, and other leading institutions.

AI security · Deepfake Detection · Large Models
0 likes · 27 min read
Alimama Tech
Mar 14, 2023 · Artificial Intelligence

Neural Approximate Nearest Neighbor (NANN): Open‑Source Large‑Scale Retrieval with Arbitrary Complex Models

Alibaba’s open‑source Neural Approximate Nearest Neighbor (NANN) library decouples index learning from model training, enabling any TensorFlow‑based deep model to perform high‑throughput, high‑accuracy HNSW‑based retrieval with GPU multi‑streaming, XLA acceleration, graph optimizations, and adversarial training that mitigates L2‑distance mismatch, all supported by ready‑to‑use benchmarks and demos.

Neural ANN · TensorFlow · adversarial training
0 likes · 7 min read
DataFunTalk
Nov 17, 2022 · Artificial Intelligence

Enhance the Visual Representation via Discrete Adversarial Training

The Alibaba AAIG team proposes Discrete Adversarial Training (DAT), which leverages VQGAN‑based discretization to generate natural‑looking adversarial samples that improve visual representation robustness and transferability across classification, self‑supervised learning, and object detection tasks without sacrificing accuracy, achieving new state‑of‑the‑art results on multiple benchmarks.

Computer Vision · adversarial training · discrete adversarial training
0 likes · 12 min read
AntTech
Oct 31, 2022 · Artificial Intelligence

Automated Attacker A² for Enhancing Model Robustness in Adversarial Training

The paper presents A², an automated, parameterized attacker that dynamically adjusts perturbation methods and step sizes during adversarial training, demonstrating improved robustness across multiple benchmarks with modest computational overhead, and outlines future directions for further efficiency and effectiveness in secure AI systems.

NeurIPS · adversarial training · automated attacker
0 likes · 9 min read
Rare Earth Juejin Tech Community
Oct 11, 2022 · Artificial Intelligence

GANomaly: Theory and Source Code Analysis

This article explains the GANomaly model for semi‑supervised anomaly detection, detailing its generator‑encoder‑discriminator architecture, loss functions, testing phase scoring, and provides annotated PyTorch source code to help readers implement and understand the approach.

Anomaly Detection · Encoder-Decoder · GAN
0 likes · 15 min read
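The testing-phase scoring mentioned in the GANomaly summary rests on an encoder-decoder-encoder pipeline: the anomaly score is the distance between the latent code of the input and the latent code of its reconstruction. A toy sketch with entirely hypothetical linear stand-ins for the learned networks (the real ones are convolutional):

```python
# Hypothetical stand-ins for GANomaly's learned networks:
# GE is the first encoder, G the generator/decoder, E the second encoder.
def GE(x):  # encode input to a 2-d latent code (made-up weights)
    return [0.5 * x[0] + 0.1 * x[1], 0.2 * x[0] - 0.3 * x[1]]

def G(z):   # decode the latent code back to input space
    return [2.0 * z[0], -3.0 * z[1]]

def E(x_hat):  # re-encode the reconstruction with a separate encoder
    return [0.5 * x_hat[0] + 0.1 * x_hat[1], 0.2 * x_hat[0] - 0.3 * x_hat[1]]

def anomaly_score(x):
    z = GE(x)          # latent code of the input
    z_hat = E(G(z))    # latent code of the reconstruction
    # GANomaly scores a sample by the squared distance between the two codes;
    # samples unlike the training data reconstruct poorly and score high.
    return sum((a - b) ** 2 for a, b in zip(z, z_hat))
```

In the trained model, normal samples yield small scores because the generator has only ever learned to reconstruct normal data.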
DataFunTalk
Aug 10, 2021 · Artificial Intelligence

Practical Deep Learning Tricks: Cyclic LR, Flooding, Warmup, RAdam, Adversarial Training, Focal Loss, Dropout, Normalization, ReLU, Group Normalization, Label Smoothing, Wasserstein GAN, Skip Connections, Weight Initialization

This article presents a concise collection of practical deep‑learning techniques—including cyclic learning‑rate, flooding, warmup, RAdam, adversarial training, focal loss, dropout, various normalization methods, ReLU, group normalization, label smoothing, Wasserstein GAN, skip connections, and weight initialization—along with code snippets and references for implementation.

GAN · adversarial training · deep learning
0 likes · 8 min read
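Of the tricks that collection covers, label smoothing is the easiest to show in isolation: replace the one-hot target with a mixture of the one-hot vector and the uniform distribution. The epsilon value below is a common illustrative choice, not one prescribed by the article:

```python
def label_smoothing(one_hot, eps=0.1):
    # Mix the one-hot target with the uniform distribution over K classes:
    # the true class gets (1 - eps) + eps/K, every other class gets eps/K.
    # This discourages over-confident logits and tends to improve calibration.
    K = len(one_hot)
    return [(1.0 - eps) * y + eps / K for y in one_hot]
```

The smoothed vector still sums to 1, so it can be dropped into any cross-entropy loss unchanged.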
58 Tech
Jun 16, 2021 · Artificial Intelligence

Improving Text Matching Accuracy in Voice Assistants: Experiments with Siamese Networks, BERT Models, and Advanced Tricks

This article evaluates classic Siamese networks, various BERT‑based pretrained models, and several training tricks such as adversarial training, k‑fold cross‑validation, and model ensembling on both a public sentence‑similarity competition dataset and an internal voice‑assistant standard‑question matching dataset, ultimately raising accuracy from 97.23% to 99.5%.

BERT · Siamese network · Text Matching
0 likes · 15 min read
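The Siamese-network setup that article starts from can be reduced to its core idea: encode both sentences with the same encoder and compare the embeddings. A minimal sketch, where the embeddings and the match threshold are illustrative assumptions:

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def is_match(emb_a, emb_b, threshold=0.8):
    # A Siamese network runs the *same* encoder over both sentences and
    # declares a match when the embedding similarity clears a threshold.
    return cosine_similarity(emb_a, emb_b) >= threshold
```

BERT-based variants replace the encoder but keep the same comparison structure, which is where the article's training tricks come in.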
DataFunSummit
Mar 30, 2021 · Artificial Intelligence

Chinese Short‑Text Entity Linking: Model Design, Multitask Learning, and Experimental Results on the Qianyan Dataset

This article presents a comprehensive approach to Chinese short‑text entity linking, describing the Qianyan dataset, pipeline and end‑to‑end task formulations, sample construction, a multitask model that jointly performs entity ranking and NIL classification, various optimization techniques including confidence learning and adversarial training, and detailed experimental analysis showing state‑of‑the‑art performance.

Chinese NLP · adversarial training · confidence learning
0 likes · 13 min read
DataFunTalk
Feb 24, 2020 · Artificial Intelligence

Adversarial Training for Transformer‑Based Natural Language Models: Methods, Variants, and Experimental Results

This presentation reviews adversarial training techniques for transformer‑based NLP models, covering the motivation, image‑based and text‑based attack generation, standard PGD, its variants FreeAT and YOPO, the proposed FreeLB method, extensive GLUE experiments, and conclusions about robustness and future directions.

FreeLB · NLP · Transformer
0 likes · 18 min read
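The PGD inner loop that FreeLB and its variants build on can be sketched on a toy differentiable loss. The epsilon, step size, step count, and the analytic gradient below are illustrative choices; in FreeLB the perturbation is applied to word embeddings and gradients are accumulated across the ascent steps:

```python
def pgd_perturb(x, grad_fn, eps=0.1, alpha=0.02, steps=5):
    # Inner maximization of PGD adversarial training: take signed gradient
    # *ascent* steps on the loss with respect to the input (or embedding),
    # projecting the perturbation back into the L-infinity ball of radius eps.
    delta = [0.0] * len(x)
    for _ in range(steps):
        g = grad_fn([xi + di for xi, di in zip(x, delta)])
        delta = [di + alpha * (1.0 if gi >= 0 else -1.0)
                 for di, gi in zip(delta, g)]
        delta = [max(-eps, min(eps, di)) for di in delta]  # project to [-eps, eps]
    return [xi + di for xi, di in zip(x, delta)]
```

Adversarial training then minimizes the task loss at these perturbed points instead of, or alongside, the clean inputs; FreeAT and FreeLB amortize the extra gradient computations that this inner loop would otherwise cost.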