Tag: Neural Network Compiler


Baidu Geek Talk
Apr 14, 2025 · Artificial Intelligence

PaddlePaddle Framework 3.0: Five Core Breakthroughs Reshaping Large Model Development

PaddlePaddle Framework 3.0 delivers five breakthroughs—dynamic‑static unified automatic parallelism, integrated training‑inference pipelines, high‑order scientific differentiation, a neural‑network compiler with automatic operator fusion, and streamlined heterogeneous chip adaptation—drastically reducing development effort, boosting training speed, and expanding compatibility for large‑scale AI models.

AI infrastructure · Large Language Models · Model Inference Optimization
23 min read
Baidu Tech Salon
Apr 2, 2025 · Artificial Intelligence

PaddlePaddle Framework 3.0 Released: Five Core Innovations for Large Models and Scientific Computing

PaddlePaddle 3.0, launched on April 1, 2025, introduces five core innovations—dynamic‑static unified automatic parallelism, a training‑inference integrated PIR, high‑order automatic differentiation for scientific computing, a one‑stage CINN compiler, and heterogeneous multi‑chip adaptation—that dramatically reduce distributed‑training code, boost performance up to four‑fold, and extend the framework to aerospace, automotive, meteorology, and life‑science applications while remaining fully compatible with the 2.0 API.

Large Models · Neural Network Compiler · PaddlePaddle
21 min read
Baidu Tech Salon
Aug 20, 2024 · Artificial Intelligence

PaddlePaddle Neural Network Compiler (CINN): Architecture, Optimization Techniques, and Performance

The PaddlePaddle Neural Network Compiler (CINN) combines a PIR‑based frontend and a hardware‑specific backend to apply graph‑level optimizations, operator fusion, schedule transformations and automatic tuning, delivering up to 4× faster kernels and 30‑60% overall speed‑ups for deep‑learning and scientific workloads.

CINN · GPU optimization · Neural Network Compiler
19 min read
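The operator fusion both CINN summaries describe can be sketched in a few lines. The function names and the plain-Python toy below are purely illustrative (not CINN's actual API): the point is that the unfused version materializes an intermediate buffer between two kernels, while the fused version computes both operations per element in a single pass.

```python
def scale_then_relu_unfused(xs, alpha):
    # Two separate "kernels": the intermediate list t is fully
    # materialized by the first op, then re-read by the second,
    # costing an extra round trip through memory.
    t = [alpha * x for x in xs]
    return [max(v, 0.0) for v in t]

def scale_then_relu_fused(xs, alpha):
    # A fused kernel computes both ops per element in one pass,
    # so the intermediate value never hits memory -- the effect
    # automatic operator fusion aims for in generated GPU code.
    return [max(alpha * x, 0.0) for x in xs]
```

On real hardware the win comes from eliminating the intermediate tensor's global-memory traffic and kernel-launch overhead, which is where speed-ups like the RMSNorm figures cited above originate.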
Baidu Geek Talk
Aug 19, 2024 · Artificial Intelligence

PaddlePaddle Neural Network Compiler (CINN): Architecture, Optimization Techniques, and Performance Gains

The PaddlePaddle Neural Network Compiler (CINN) combines a PIR‑based frontend that performs graph‑level optimizations such as constant folding, dead‑code elimination and operator fusion with a backend that applies schedule transformations and auto‑tuning, delivering up to 4× faster RMSNorm kernels and 30‑60% overall speed‑ups for generative AI and scientific‑computing workloads.

CINN · GPU · Neural Network Compiler
18 min read
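As a rough illustration of the frontend graph-level passes the last summary mentions, here is a toy constant-folding pass over a tiny expression tree. The tuple-based IR (`(op, *args)`, with numbers as constants and strings as named inputs) is invented for this sketch and has nothing to do with PIR's real data structures.

```python
def fold(node):
    """Recursively replace subtrees whose operands are all
    constants with their computed value (toy constant folding)."""
    if not isinstance(node, tuple):
        return node  # a constant or a named input like "x"
    op, *args = node
    args = [fold(a) for a in args]
    if all(isinstance(a, (int, float)) for a in args):
        ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
        return ops[op](*args)
    return (op, *args)

# ("mul", ("add", 2, 3), "x") folds to ("mul", 5, "x"): the
# constant subexpression is evaluated once at compile time
# instead of on every run.
```

Dead-code elimination works on the same graph representation, dropping nodes whose results no outputs depend on; a production compiler runs such passes to a fixed point before handing the graph to the backend scheduler.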