Tag: HBM

Articles collected around this technical thread:

Architects' Tech Alliance
Aug 13, 2024 · Fundamentals

Understanding High Bandwidth Memory (HBM): Architecture, Benefits, and Applications

High Bandwidth Memory (HBM) is a DRAM technology that stacks dies and connects them with through‑silicon vias (TSVs) and micro‑bump interconnects to deliver ultra‑high data rates, lower power consumption, and a compact form factor, addressing the bandwidth, latency, power, space, thermal, and complexity challenges that traditional 2D memory faces in GPU, AI, HPC, and data‑center workloads.
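The "ultra‑high data rates" come mostly from interface width: a stacked HBM device exposes a 1024‑bit bus, versus 32 bits for a single GDDR chip. A rough back‑of‑the‑envelope sketch (the per‑pin rates below are typical published figures for HBM2 and GDDR6, used here only for illustration, not taken from the article):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8 bits-per-byte.
# Illustrative sketch; per-pin rates are typical published values, not from the article.

def peak_bandwidth_gb_per_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Return peak bandwidth in GB/s for a memory interface."""
    return bus_width_bits * pin_rate_gbps / 8

# One HBM2 stack: 1024-bit interface at 2.4 Gb/s per pin -> ~307 GB/s.
hbm2_stack = peak_bandwidth_gb_per_s(1024, 2.4)

# One GDDR6 chip: 32-bit interface at 16 Gb/s per pin -> 64 GB/s.
gddr6_chip = peak_bandwidth_gb_per_s(32, 16.0)

print(f"HBM2 stack: {hbm2_stack:.0f} GB/s, GDDR6 chip: {gddr6_chip:.0f} GB/s")
```

The wide-but-slow bus is what lets HBM hit high bandwidth at lower per‑bit energy than narrow, very fast off‑package interfaces.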

Artificial Intelligence · HBM · High Bandwidth Memory
10 min read
Architects' Tech Alliance
Jun 19, 2024 · Fundamentals

Overview of High Bandwidth Memory (HBM) Technology and Market Trends

The article explains the evolution, technical specifications, and packaging methods of High Bandwidth Memory (HBM) from HBM1 to HBM3E, highlights its dominant role in AI servers, and analyzes market share and growth forecasts for HBM products through 2026.

AI Servers · HBM · High Bandwidth Memory
8 min read
Architects' Tech Alliance
May 14, 2024 · Fundamentals

Fundamentals of GPU Computing: PCIe, NVLink, NVSwitch, and HBM

This article provides a comprehensive overview of the core components and terminology of large‑scale GPU computing, covering GPU server architecture, PCIe interconnects, NVLink generations, NVSwitch, high‑bandwidth memory (HBM), and bandwidth unit considerations for AI and HPC workloads.
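The "bandwidth unit considerations" mentioned above usually boil down to two traps: gigabits versus gigabytes (a factor of 8) and per‑direction versus aggregate figures. A minimal sketch — the PCIe Gen5 numbers are standard published lane rates, used only as an example:

```python
# Interconnect specs usually quote gigabits per second (Gb/s or GT/s);
# memory specs quote gigabytes per second (GB/s). Mixing them up is off by 8x.

def gbit_to_gbyte(rate_gbps: float) -> float:
    """Convert a rate in Gb/s to GB/s (8 bits per byte)."""
    return rate_gbps / 8

# PCIe Gen5 x16: 32 GT/s per lane x 16 lanes = 512 Gb/s raw, per direction.
pcie5_x16 = gbit_to_gbyte(32 * 16)    # 64 GB/s per direction (before encoding overhead)

# Vendors often quote the aggregate of both directions, doubling the headline number.
pcie5_x16_aggregate = 2 * pcie5_x16   # 128 GB/s "bidirectional"

print(f"PCIe Gen5 x16: {pcie5_x16:.0f} GB/s per direction")
```

When comparing a PCIe figure against an NVLink or HBM figure, always normalize to the same unit and the same direction convention first.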

AI hardware · GPU computing · HBM
11 min read
Architects' Tech Alliance
Oct 6, 2023 · Fundamentals

High Bandwidth Memory (HBM) Technology Overview and Its Integration in Modern Processors

High Bandwidth Memory (HBM), introduced in 2014 using TSV stacking, has evolved through HBM2, HBM2e, and HBM3 standards and is now integrated into CPUs, GPUs, and accelerators from AMD, NVIDIA, Intel, and others, with advanced interconnects like CoWoS, EMIB, and Foveros enabling high‑capacity, high‑bandwidth packaging.

CPU · Chiplet · GPU
16 min read
Architects' Tech Alliance
Sep 14, 2023 · Fundamentals

Deep Report: Opportunities in Memory Interface Chips and DDR5 Evolution

This report analyzes the role of memory interface chips in modern servers, the transition from DDR4 to DDR5, market penetration forecasts, the technical distinctions between RCD and DB buffers, and emerging standards such as CXL, HBM, and PCIe that shape future high‑performance computing architectures.

CXL · DDR5 · HBM
10 min read
DataFunSummit
Feb 15, 2023 · Artificial Intelligence

ChatGPT Boom Fuels Surge in AI Chip Demand, Boosting Nvidia, Samsung, and SK Hynix

The explosive growth of ChatGPT and other AI chatbots is driving unprecedented demand for high‑performance AI chips and high‑bandwidth memory, positioning Nvidia as the primary beneficiary while also creating significant market opportunities for Samsung, SK Hynix, and other semiconductor manufacturers.

AI chips · AI hardware · ChatGPT
11 min read
Tencent Architect
Nov 9, 2017 · Artificial Intelligence

Why General‑Purpose CPUs Are Inefficient for Deep Learning: Heterogeneous Computing and AI Processor Design

The article analyzes the limitations of general‑purpose CPUs for deep‑learning workloads, explains how semiconductor scaling and memory‑bandwidth constraints drive the shift toward specialized heterogeneous processors such as GPUs, FPGAs, and ASICs, and discusses the design trade‑offs of embedded versus cloud AI accelerators.

AI · ASIC · CPU
13 min read