Understanding Serial and Parallel Computing and Their Speedup Limits
This article explains the concepts of serial and parallel computing, describes how parallelism divides tasks across multiple processors, discusses speedup metrics and Amdahl's law, and highlights practical limits of multi‑core performance in modern computer architectures.
Serial Computing: A problem is broken down into a sequence of instructions that are executed one after another on a single processor, with only one instruction running at any given time.
Parallel Computing: A problem is divided into multiple parts that can be executed simultaneously; each part is further broken into instructions that run concurrently on different processors.
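The contrast between the two models can be sketched in Python. This is a minimal illustration, not code from the article: the function names are my own, and a thread pool is used only to keep the sketch self-contained (CPU-bound work in CPython would normally use processes instead of threads).

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

def run_serial(data):
    # Serial computing: one instruction stream; each item is
    # processed one after another on a single processor.
    return [square(x) for x in data]

def run_parallel(data, workers=4):
    # Parallel computing: the problem is divided into parts that
    # are submitted to separate workers and executed concurrently.
    # (A thread pool keeps this runnable anywhere; CPU-bound work
    # would typically use ProcessPoolExecutor.)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, data))
```

Both functions produce the same results; the difference lies only in how the work is scheduled across execution units.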
In a computing cluster, parallel resources include multi‑processor or multi‑core machines, or any number of interconnected computers; in such systems, independent units of work (for example, individual messages) can be handled in parallel.
The primary goal of parallel processing is to reduce execution time, measured by speedup: how many times faster the parallel version runs compared with the serial one, with linear scaling (n‑fold speedup on n processors) as the ideal. Amdahl's law bounds this: if a fraction p of the program can be parallelized, the speedup on n processors is at most 1 / ((1 − p) + p / n), so the serial fraction limits the achievable gain no matter how many processors are added.
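Amdahl's law can be expressed as a one‑line helper (a sketch; the function name is my own):

```python
def amdahl_speedup(p, n):
    """Theoretical speedup on n processors when a fraction p of the
    program can be parallelized (Amdahl's law): 1 / ((1 - p) + p / n)."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallelizable, 8 cores give roughly 4.7x,
# and no number of cores can exceed 1 / (1 - 0.9) = 10x.
```

The limit as n grows is 1 / (1 − p), which is why the non‑parallelizable fraction dominates multi‑core performance.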
In practice, perfect linear speedup is rare; achieving 85% of linear scaling on more than four cores is considered excellent. Real‑world performance usually falls short of the ideal n‑fold increase because of the serial, non‑parallelizable portions of the program.
The article emphasizes evaluating whether a component can fully exploit K cores, that is, whether its performance increases proportionally with the number of cores.
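That proportionality question is commonly quantified as parallel efficiency, the ratio of achieved speedup to the ideal linear speedup. The helper below is illustrative, not from the article:

```python
def parallel_efficiency(speedup, cores):
    # Ratio of achieved speedup to ideal linear speedup (== cores);
    # 1.0 means perfect proportional scaling across all cores.
    return speedup / cores

# A measured 3.4x speedup on 4 cores corresponds to 85% of linear scaling.
```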
It also references a lecture by He Wangquan on multi‑core compilation optimization, noting that the material, while older, provides a comprehensive overview of parallel compilation techniques.
Numerous illustrative images accompany the text, and readers are directed to additional resources, including a PDF of a broader architecture knowledge summary and a promotional e‑book collection on architecture‑related technologies.
Architects' Tech Alliance
Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.