Survey of General CPU Performance Benchmarking and Emerging Trends (2023)
This article reviews the evolution of mainstream CPU performance benchmarks such as SPEC and TPC, compares their methodologies and tools, discusses the challenges of evaluating heterogeneous CPUs, and outlines future research directions for researchers and practitioners.
Drawing on the 2023 study "General CPU Performance Benchmarking Research Overview", the article surveys how performance benchmarks have evolved alongside rapid CPU development, covering x86 as well as other architectures such as ARM, RISC-V, Alpha, and MIPS.
1. Common Performance Benchmarks
SPEC Benchmarks – Introduced in 1988, SPEC has expanded to cover CPU, server energy efficiency, file system, HPC, and web application performance. The SPEC CPU series (CPU92, CPU95, CPU2000, CPU2006, CPU2017) has evolved to address multi-core processors, offering both speed (single-copy integer and floating-point) and rate (multi-copy throughput) metrics, and increasingly reflects real-world workload characteristics.
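SPEC CPU scoring can be illustrated with a short sketch: each workload's score is the ratio of a fixed reference machine's runtime to the measured runtime, and the overall score is the geometric mean of those ratios. The runtimes below are hypothetical placeholders, not actual SPEC reference times.

```python
from math import prod

def spec_ratio(reference_seconds, measured_seconds):
    """Per-workload ratio: reference runtime over measured runtime (higher is faster)."""
    return reference_seconds / measured_seconds

def composite_score(ratios):
    """Overall score is the geometric mean of the per-workload ratios."""
    return prod(ratios) ** (1.0 / len(ratios))

# Hypothetical runtimes (seconds) for three workloads.
reference = [1000.0, 2000.0, 1500.0]
measured  = [250.0, 400.0, 500.0]

ratios = [spec_ratio(r, m) for r, m in zip(reference, measured)]
print(ratios)                              # [4.0, 5.0, 3.0]
print(round(composite_score(ratios), 3))   # geometric mean of the ratios
```

The geometric mean (rather than an arithmetic mean) keeps any single workload from dominating the composite, which is why SPEC uses it for its reported scores.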
TPC Benchmarks – Unlike SPEC, TPC emphasizes system‑level performance. It includes OLTP tests (TPC‑C, TPC‑E), decision‑support and big‑data tests (TPC‑H, TPC‑DS), and virtualization tests (TPC‑VMS). These benchmarks simulate complex transaction processing and data‑analysis scenarios, guiding CPU and system design for cloud‑centric workloads.
2. Other Benchmarks
Additional tools such as Geekbench, SPLASH, PARSEC, LINPACK, MiBench, the NAS Parallel Benchmarks, and CPU-Z are discussed, highlighting their cross-platform capabilities and suitability for evaluating single-core, multi-core, vector, and memory-intensive workloads.
3. Comparative Analysis
The survey compares benchmark characteristics, supported languages, compilers, operating systems, and target CPU architectures. It argues that no single measure of raw CPU performance is sufficient; a combination of diverse benchmarks is needed to capture performance across varied application domains.
4. Challenges and Future Directions
Key challenges include assessing heterogeneous CPUs, ensuring test stability and repeatability, and improving measurement accuracy for microarchitectural events (e.g., cache-misses, branch-misses, dTLB-load-misses). The article suggests combining multiple benchmarks, refining pre-set correction models, and extending benchmarks to better support emerging workloads such as AI and cloud computing.
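The microarchitectural events named above are typically read from hardware counters (e.g., via Linux `perf stat`) and turned into miss rates. A small sketch of that post-processing step; the event names follow Linux perf conventions, and the counts are made up for illustration:

```python
# Hypothetical counter readings, as a `perf stat` run might report them.
counters = {
    "cache-references": 180_000_000,
    "cache-misses":       9_000_000,
    "branches":         950_000_000,
    "branch-misses":     14_250_000,
    "dTLB-loads":       600_000_000,
    "dTLB-load-misses":     600_000,
}

def miss_rate(misses, total):
    """Fraction of events that missed; guard against a zero denominator."""
    return misses / total if total else 0.0

cache_mr  = miss_rate(counters["cache-misses"], counters["cache-references"])
branch_mr = miss_rate(counters["branch-misses"], counters["branches"])
dtlb_mr   = miss_rate(counters["dTLB-load-misses"], counters["dTLB-loads"])

for name, rate in [("cache", cache_mr), ("branch", branch_mr), ("dTLB", dtlb_mr)]:
    print(f"{name} miss rate: {rate:.2%}")
```

Reporting rates rather than raw counts is what makes runs comparable across different run lengths and input sizes, which is part of the stability problem the article raises.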
Overall, the survey synthesizes the state‑of‑the‑art in CPU benchmarking, outlines recent advancements, and proposes research avenues to enhance benchmark relevance for modern and future computing architectures.
Architects' Tech Alliance