
Computing‑in‑Memory (CiM) Technology: Concepts, History, Advantages, Classifications and Application Scenarios

This article provides a comprehensive overview of Computing‑in‑Memory technology, covering its definition, historical evolution, performance advantages over traditional von Neumann architectures, various technical classifications, storage‑media choices, market drivers, and its pivotal role in AI and big‑data workloads across edge, cloud and automotive domains.

Architects' Tech Alliance

Computing‑in‑Memory (CiM), also called processing‑in‑memory (PIM), embeds arithmetic capability directly inside memory arrays, enabling two‑dimensional and three‑dimensional matrix multiply‑accumulate operations without moving data between separate storage and processing units.
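The core idea above — performing multiply‑accumulates where the weights are stored — can be sketched numerically. In an analog resistive crossbar (one common CiM realization, not a specific product described in this article), weights are programmed as cell conductances, inputs are applied as wordline voltages, and each bitline current is a dot product by Ohm's and Kirchhoff's laws. The names and sizes below are illustrative:

```python
import numpy as np

# Illustrative sketch of in-memory matrix-vector multiplication.
# Weights live in the array as conductances G; inputs arrive as
# voltages V; each bitline current is I_j = sum_i V_i * G[i, j],
# so the whole MVM happens in one step inside the memory array.

rng = np.random.default_rng(0)

G = rng.uniform(0.0, 1.0, size=(4, 3))   # stored weights (conductances)
V = rng.uniform(0.0, 1.0, size=4)        # input activations (voltages)

I = V @ G                                 # bitline currents = MVM result

# A von Neumann processor would instead fetch every weight from
# memory and accumulate in the ALU, element by element:
I_ref = np.array([sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])

assert np.allclose(I, I_ref)
print(I)
```

The point of the comparison is that the reference loop touches each weight individually across the memory bus, while the in-array version produces all output currents in parallel with no weight movement.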

The concept dates back to the late 1960s, but the rapid growth of cloud computing and AI has exposed the “memory wall”, “bandwidth wall” and “power wall” caused by the widening imbalance between processor speed (≈55% annual improvement) and memory speed (≈10% annual improvement). Data movement can consume 60–90% of a system's total energy, making traditional von Neumann architectures a critical bottleneck for modern AI workloads.
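A back-of-envelope calculation (my own illustration, using the growth rates quoted above, not data from the article) shows how quickly those mismatched improvement rates compound into a wall:

```python
# If processor performance improves ~55% per year while memory speed
# improves ~10% per year, the processor-memory gap compounds as
# (1.55 / 1.10) ** years.

CPU_GROWTH = 1.55
MEM_GROWTH = 1.10

def speed_gap(years: int) -> float:
    """Relative processor/memory speed gap after `years` years."""
    return (CPU_GROWTH / MEM_GROWTH) ** years

# After a single decade the gap has grown roughly 30-fold, which is
# why data movement, not arithmetic, dominates energy and latency.
print(f"gap after 10 years: {speed_gap(10):.1f}x")
```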

CiM eliminates unnecessary data transfers, dramatically reduces latency and power consumption, and can increase effective compute density by hundreds to thousands of times; specialized AI accelerators report throughput above 1000 TOPS at efficiencies of 10–100 TOPS/W.

Key milestones include the 1969 Stanford proposal of a computing-in-memory computer, HP's 2010 memristor-based Boolean logic demonstration, the 2016 PRIME RRAM-based deep-learning accelerator from UCSB and collaborators (≈20× lower power, ≈50× faster), and the 2017 MICRO conference, where major vendors and labs (NVIDIA, Intel, Microsoft, Samsung, UCSB) showcased prototype CiM systems.

CiM architectures are commonly classified as:

Processing‑with‑Memory (lookup‑table‑based computation; an early, GPU‑style approach).

Near‑Memory Computing (separate compute module placed close to memory, e.g., AMD Zen CPUs, HBM‑PIM).

In‑Memory Computing (logic embedded inside SRAM/RRAM/DRAM cells, used by Mythic, Qianxin, etc.).

Logic‑in‑Memory (the latest generation, in which dedicated compute logic is co‑located with storage, demonstrated by TSMC at ISSCC 2021 and by Qianxin).
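The first class in this taxonomy, processing-with-memory, replaces computation with retrieval: results are precomputed into a table held in memory, and "computing" becomes a lookup. A minimal sketch of the idea (the table size and operation are my own illustrative choices):

```python
# Lookup-table-based "processing-with-memory": precompute a 4-bit
# multiplier's entire result space once, as if burned into memory,
# then serve every multiply as a memory read instead of ALU work.

LUT = {(a, b): a * b for a in range(16) for b in range(16)}

def lut_multiply(a: int, b: int) -> int:
    """Multiply two 4-bit operands by table lookup, not arithmetic."""
    return LUT[(a, b)]

assert lut_multiply(7, 9) == 63
print(lut_multiply(7, 9))
```

The trade-off is the classic one: table size grows exponentially with operand width, so the approach suits narrow operands and fixed functions — which is one reason later CiM generations moved logic into or next to the array instead.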

The market is driven by AI and metaverse workloads that demand massive parallelism and energy efficiency. Exascale data-center projects highlight the need for CiM to cut the >50% power overhead of data movement, while edge and automotive applications benefit from the low-latency, low-power characteristics of on-chip compute.

Various storage media are employed in CiM designs: digital SRAM and DRAM for high performance, and analog RRAM, MRAM and Flash for high density, each with trade-offs in speed, energy, reliability and manufacturing maturity. SRAM offers the best speed and noise immunity, while emerging non-volatile memories (RRAM, MRAM, PCRAM) promise higher density and non-volatility.
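The reliability trade-off between digital and analog media can be made concrete with a small simulation. Here analog device variation is modeled as multiplicative noise on the stored weights — the ~5% figure and matrix sizes are assumptions for illustration, not measurements from the article:

```python
import numpy as np

# Digital SRAM-based CiM reproduces the matrix-vector product
# bit-exactly; analog media (e.g. RRAM) trade that exactness for
# density, because each stored conductance deviates from its target.

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32))   # ideal weight matrix
x = rng.standard_normal(64)         # input vector

exact = x @ W                       # digital CiM: exact result

# Analog CiM: model ~5% multiplicative device variation per cell.
W_analog = W * (1.0 + 0.05 * rng.standard_normal(W.shape))
noisy = x @ W_analog

rel_err = np.linalg.norm(noisy - exact) / np.linalg.norm(exact)
print(f"relative analog error: {rel_err:.3%}")
```

Neural-network inference often tolerates errors of this magnitude, which is why analog CiM targets AI workloads rather than general-purpose exact computation.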

Application scenarios span AI inference in wearables, smart driving, data-center accelerators, and brain-inspired neuromorphic processors. For edge devices, CiM is estimated to contribute roughly 30% of the overall performance advantage; for high-performance cloud and edge servers, that contribution rises to roughly 90%.

In summary, CiM is recognized by academia and industry as a next‑generation computing paradigm—often described as the “third pole” of compute architecture after CPUs and GPUs—poised to reshape AI, big‑data, and brain‑inspired computing in the coming years.

Tags: Big Data, AI acceleration, Memory Architecture, computing-in-memory, digital vs analog, storage technologies
Written by

Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
