Tag

sparse models

4 articles collected around this technical thread.

Architect
Feb 10, 2025 · Artificial Intelligence

Evolution of DeepSeek Mixture‑of‑Experts (MoE) Architecture from V1 to V3

This article reviews the development of DeepSeek's Mixture-of-Experts (MoE) models, tracing their evolution from the original DeepSeekMoE V1 through V2 to V3, detailing architectural innovations such as fine‑grained expert segmentation, shared‑expert isolation, load‑balancing losses, device‑limited routing, and the shift from softmax to sigmoid gating.
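Among the innovations the article covers, the V3 gating change is easy to sketch: instead of a softmax over all routed experts, each expert gets an independent sigmoid affinity score, the top-k are selected, and their gates are renormalized, while a small set of shared experts is always active. The following is a minimal illustrative sketch of that routing pattern, not DeepSeek's actual implementation; all names and shapes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_routed, n_shared, top_k = 8, 4, 1, 2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def moe_forward(x, W_gate, routed_experts, shared_experts, top_k):
    """Route one token x through top-k routed experts plus all shared experts."""
    scores = sigmoid(x @ W_gate)              # independent per-expert affinity (V3 style)
    idx = np.argsort(scores)[-top_k:]         # pick the top-k routed experts
    gates = scores[idx] / scores[idx].sum()   # normalize only over the selected experts
    out = sum(g * routed_experts[i](x) for g, i in zip(gates, idx))
    out += sum(e(x) for e in shared_experts)  # shared experts are always active
    return out

# Toy linear "experts" stand in for real FFN blocks.
routed = [lambda x, W=rng.normal(size=(d_model, d_model)): x @ W
          for _ in range(n_routed)]
shared = [lambda x, W=rng.normal(size=(d_model, d_model)): x @ W
          for _ in range(n_shared)]
W_gate = rng.normal(size=(d_model, n_routed))

x = rng.normal(size=d_model)
y = moe_forward(x, W_gate, routed, shared, top_k)
print(y.shape)  # (8,)
```

Because sigmoid scores are not coupled across experts, renormalizing over the selected top-k keeps the combined gate weights summing to one without a full softmax.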

DeepSeek · LLM · Load Balancing
0 likes · 21 min read
JD Retail Technology
Aug 30, 2024 · Artificial Intelligence

GPU Optimization Practices for Training and Inference in JD Advertising Recommendation Systems

The article details JD Advertising's technical challenges and solutions for large‑scale sparse recommendation models, describing GPU‑focused storage, compute, and I/O optimizations for both training and low‑latency inference, including distributed pipelines, heterogeneous deployment, batch aggregation, multi‑stream execution, and compiler extensions.
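One of the techniques named above, batch aggregation, is a generic serving pattern: small per-request feature sets are merged into one contiguous batch, scored in a single pass, and the results scattered back per request. Below is an illustrative sketch of that pattern with hypothetical names; it is not JD's code, and the toy scorer stands in for the real ranking model.

```python
import numpy as np

def aggregate_and_score(requests, score_fn, max_batch=64):
    """requests: list of (request_id, feature_matrix) with varying row counts."""
    ids, mats, sizes = [], [], []
    for rid, feats in requests[:max_batch]:
        ids.append(rid)
        mats.append(feats)
        sizes.append(len(feats))
    batch = np.concatenate(mats, axis=0)      # merge into one contiguous batch
    scores = score_fn(batch)                  # one scoring pass instead of many small ones
    out, offset = {}, 0
    for rid, n in zip(ids, sizes):
        out[rid] = scores[offset:offset + n]  # scatter results back per request
        offset += n
    return out

# Toy scorer: row-wise sum stands in for a GPU ranking model.
score_fn = lambda x: x.sum(axis=1)
reqs = [("a", np.ones((2, 3))), ("b", np.ones((3, 3)))]
res = aggregate_and_score(reqs, score_fn)
print(res["a"])  # [3. 3.]
```

On a GPU, the payoff is that one large kernel launch amortizes fixed per-call overhead that many small launches would each pay.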

Distributed Systems · GPU Optimization · Inference
0 likes · 13 min read
DataFunSummit
Mar 12, 2023 · Artificial Intelligence

PaddleBox and FeaBox: GPU‑Based Large‑Scale Sparse Model Training and Integrated Feature Extraction Frameworks at Baidu

The article introduces PaddleBox and FeaBox, two GPU‑driven frameworks designed for massive sparse DNN training and unified feature extraction, detailing their architecture, performance advantages, hardware‑software co‑design challenges, and successful deployment across Baidu's advertising systems.

AI Infrastructure · FeaBox · Feature Extraction
0 likes · 24 min read
DataFunTalk
Apr 17, 2022 · Artificial Intelligence

DeepRec: Alibaba’s Sparse Model Training Engine – Architecture, Features, and Open‑Source Status

DeepRec, developed at Alibaba since 2016, is a specialized sparse‑model training engine that addresses feature elasticity, training performance, and deployment challenges through dynamic elastic features, optimized runtimes, distributed training frameworks, incremental model export, and multi‑level storage. It is now being open‑sourced for broader industry collaboration.
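The "dynamic elastic features" idea mentioned above can be sketched as a hash-table-backed embedding that admits unseen sparse feature IDs on demand and evicts stale ones, instead of a fixed-size dense table. The class and method names below are illustrative assumptions, not DeepRec's API.

```python
import numpy as np

class ElasticEmbedding:
    """Toy hash-table-backed embedding: grows with new feature IDs, shrinks on eviction."""

    def __init__(self, dim, seed=0):
        self.dim = dim
        self.table = {}                        # feature id -> embedding vector
        self.rng = np.random.default_rng(seed)

    def lookup(self, feature_ids):
        rows = []
        for fid in feature_ids:
            if fid not in self.table:          # admit unseen IDs lazily
                self.table[fid] = self.rng.normal(scale=0.01, size=self.dim)
            rows.append(self.table[fid])
        return np.stack(rows)

    def evict(self, keep_ids):
        """Drop stale features, e.g. those unseen within some time window."""
        self.table = {k: v for k, v in self.table.items() if k in keep_ids}

emb = ElasticEmbedding(dim=4)
vecs = emb.lookup([101, 202, 101])
print(vecs.shape)       # (3, 4)
emb.evict({101})
print(len(emb.table))   # 1
```

A production engine would back this with sharded, multi-level storage and concurrent access control; the sketch only shows the elasticity contract of grow-on-lookup and shrink-on-eviction.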

AI Infrastructure · DeepRec · Feature Engineering
0 likes · 15 min read