Tag

Training Acceleration

1 view collected around this technical thread.

360 Tech Engineering
Oct 15, 2024 · Artificial Intelligence

Implementation and Optimization of 360 AI Compute Center: Infrastructure, Network, Kubernetes, and Training/Inference Acceleration

The article details the design and deployment of 360's AI Compute Center, covering GPU server selection, high‑performance networking, Kubernetes‑based cluster management, advanced scheduling, training and inference acceleration techniques, and a comprehensive AI development platform with visualization and fault‑tolerance features.

AI Infrastructure · GPU Cluster · Inference Acceleration
0 likes · 21 min read
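For readers unfamiliar with the Kubernetes-based cluster management the article covers, GPUs in Kubernetes are typically requested through a device-plugin resource. A minimal pod-spec fragment sketching the idea (illustrative only; the pod name and image are hypothetical, not 360's actual configuration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job          # hypothetical name
spec:
  containers:
    - name: trainer
      image: pytorch/pytorch:latest   # illustrative image choice
      resources:
        limits:
          nvidia.com/gpu: 8       # GPUs requested via the NVIDIA device plugin
```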
58 Tech
Jun 3, 2024 · Artificial Intelligence

Parameter-Efficient Fine-Tuning (PEFT) Methods for Large Language Models: LoRA, QLoRA, AdaLoRA, SoRA, and Training Acceleration with Unsloth

This article systematically analyzes popular parameter‑efficient fine‑tuning (PEFT) techniques for large language models—including Adapter Tuning, Prefix Tuning, LoRA, QLoRA, AdaLoRA, and SoRA—detailing their principles, implementation code, experimental results on NLU tasks, and practical acceleration using the Unsloth library.

AdaLoRA · Large Language Models · LoRA
0 likes · 39 min read
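The low-rank update at the heart of LoRA can be sketched in a few lines (an illustrative numpy sketch, not code from the article; dimensions and the alpha/r scaling convention are assumptions):

```python
import numpy as np

# LoRA sketch: the frozen weight W is augmented by a low-rank update B @ A,
# so only r * (d_in + d_out) parameters are trained instead of d_in * d_out.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 512, 512, 8, 16  # assumed dimensions and rank

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: no change at start

def lora_forward(x):
    # Base path plus the scaled low-rank path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapter is inactive, so outputs match the base model exactly.
assert np.allclose(lora_forward(x), W @ x)
```

Because B starts at zero, training begins from the pretrained model's behavior, and the trainable parameter count (A and B) is a small fraction of W.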
Tencent Cloud Developer
May 25, 2023 · Artificial Intelligence

QQGC: A Two-Stage Text-to-Image Model with Prior and Decoder Architectures for Efficient AI Painting

QQGC, Tencent's two-stage text-to-image model, separates a CLIP-based Prior mapping from a Stable Diffusion decoder. It leverages T5-enhanced text embeddings and a suite of efficiency techniques, including FP16, flash attention, ZeRO, and GPU RDMA, to train models with over 2 B parameters on 64 GPUs, achieving state-of-the-art FID and CLIP scores. The model supports image variation, semantic img2img, precise CLIP-vector edits, and unsafe-content filtering, and now powers the company's Magic Painting Room.

AI Painting · CLIP Embedding · Training Acceleration
0 likes · 12 min read
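One of the efficiency tricks listed above, FP16 training, halves memory per element but has a narrow numeric range, which is why mixed-precision pipelines pair it with loss scaling. A minimal numpy illustration of both effects (an assumption-level sketch, not Tencent's pipeline; the scale factor is arbitrary):

```python
import numpy as np

# FP16 halves memory per element relative to FP32.
w32 = np.ones((1024, 1024), dtype=np.float32)
w16 = w32.astype(np.float16)
assert w16.nbytes == w32.nbytes // 2  # 2 MiB vs 4 MiB

# But FP16's range is narrow: tiny gradient values underflow to zero...
tiny_grad = np.float16(1e-8)
assert tiny_grad == 0.0
# ...so mixed-precision training multiplies the loss by a scale factor
# before backprop, then divides the gradients by it afterwards.
scale = 1024.0
scaled_grad = np.float16(1e-8 * scale)  # now representable in FP16
assert scaled_grad > 0.0
```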