
Understanding Intel Xeon Server CPU Naming Rules, Generations, and Architecture

This article explains the Intel Xeon server CPU naming conventions, outlines each generation from Skylake to Sapphire Rapids, and details the internal Mesh interconnect and external UPI bus that enable high core counts and multi‑CPU scalability in modern data‑center processors.

Refining Core Development Skills

Hello, I'm Fei! In the previous two articles we introduced personal desktop CPU model conventions, core design, and key architectural changes; now we turn to server CPUs.

1. Intel server CPU naming rules – Intel's Xeon brand identifies server processors. The tier name indicates the performance level: Platinum (high‑end), Gold (mid‑range), Silver (entry). In the four‑digit model number that follows, the first digit encodes the tier level (8/9 = Platinum, 5/6 = Gold, 4 = Silver), the second digit denotes the generation (1 = Skylake, 2 = Cascade Lake, 3 = Cooper Lake/Ice Lake, 4 = Sapphire Rapids), and the remaining digits form the SKU, a stock‑keeping identifier. Optional suffix letters describe specific features (U = single socket, Q = liquid‑cooled, N = network/NFV optimized, T = long life with 10‑year availability, P = IaaS‑optimized, V = SaaS‑optimized). For example, in Xeon Platinum 8260, the 8 marks the Platinum tier, the 2 marks the second generation (Cascade Lake), and 60 is the SKU.
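The decoding rules above can be sketched as a small Python helper. This is a hypothetical illustration, not an Intel tool; the suffix table only covers the letters mentioned in this article.

```python
# Hypothetical helper: decode a Xeon Scalable model string such as
# "Platinum 8260" according to the naming rules described above.

GENERATIONS = {
    "1": "Skylake",
    "2": "Cascade Lake",
    "3": "Cooper Lake / Ice Lake",
    "4": "Sapphire Rapids",
}

SUFFIXES = {
    "U": "single socket",
    "Q": "liquid-cooled",
    "N": "network/NFV optimized",
    "T": "long-life (10-year availability)",
    "P": "IaaS-optimized",
    "V": "SaaS-optimized",
}

def decode_xeon(model: str) -> dict:
    """Split 'Platinum 8260' or 'Gold 6212U' into tier, generation, SKU, suffix."""
    tier, number = model.split()
    suffix = number[4:] if len(number) > 4 else ""
    return {
        "tier": tier,
        "generation": GENERATIONS.get(number[1], "unknown"),
        "sku": number[2:4],
        "feature": SUFFIXES.get(suffix, "none"),
    }

print(decode_xeon("Platinum 8260"))
# {'tier': 'Platinum', 'generation': 'Cascade Lake', 'sku': '60', 'feature': 'none'}
```

Running it on "Gold 6212U" would report the single‑socket suffix, matching the table above.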

2. Server CPU generations – Since 2017 Intel has released four generations of scalable processors. The Xeon Platinum 8260 belongs to the second generation (Cascade Lake, 2019). The table from the original article, listing release years, process nodes, and microarchitectures, is reconstructed below. Note that Sapphire Rapids is built on the Intel 7 process (formerly branded as 10 nm Enhanced SuperFin), not a true 7 nm node.

Generation | Microarchitecture | Release year | Process node
1st        | Skylake           | 2017         | 14 nm
2nd        | Cascade Lake      | 2019         | 14 nm
3rd        | Ice Lake          | 2021         | 10 nm
4th        | Sapphire Rapids   | 2023         | Intel 7

3. On‑die Mesh interconnect – The 28‑core Platinum die arranges its tiles in a 5‑row × 6‑column Mesh, with two of the 30 positions occupied by integrated memory controllers (IMCs). Each IMC drives three memory channels with two DIMMs per channel, for up to 12 DIMMs per socket. Each core tile contains private L1 caches (32 KiB instruction + 32 KiB data) and a 1 MiB L2, while all cores share a unified L3 (last‑level cache, LLC). The top edge of the die, where the traditional northbridge functions now live on‑chip, hosts the PCIe 3.0 (8 GT/s) and UPI interfaces.
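The tile counts and cache sizes quoted above can be sanity‑checked with a little arithmetic. This sketch uses only the figures from this article (the per‑core L3 slice size is not stated here, so it sticks to L1/L2 and DIMM slots):

```python
# Back-of-envelope totals for the 28-core Xeon Platinum die described above.

ROWS, COLS = 5, 6            # Mesh grid dimensions
IMC_TILES = 2                # two tile positions hold integrated memory controllers
cores = ROWS * COLS - IMC_TILES
assert cores == 28           # matches the 28-core Platinum part

# Memory topology: each IMC drives 3 channels, 2 DIMMs per channel.
dimm_slots = IMC_TILES * 3 * 2          # 12 DIMM slots per socket

# Per-core private caches.
l1_per_core_kib = 32 + 32               # 32 KiB instruction + 32 KiB data
l2_per_core_mib = 1

total_l1_kib = cores * l1_per_core_kib  # 1792 KiB of L1 across the die
total_l2_mib = cores * l2_per_core_mib  # 28 MiB of L2 across the die

print(dimm_slots, total_l1_kib, total_l2_mib)
# 12 1792 28
```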

4. Inter‑CPU UPI bus – To scale beyond a single socket, Intel replaced the older QPI with the Ultra Path Interconnect (UPI), raising per‑link speed from 9.6 GT/s to 10.4 GT/s while lowering power. Xeon Platinum parts support up to three UPI links, enabling dual‑, quad‑, or eight‑socket configurations. Diagrams in the original article illustrate the physical link topologies for two, four, and eight CPUs.
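The GT/s figures translate into rough bandwidth numbers as follows. The 2‑bytes‑per‑transfer width is a common back‑of‑envelope assumption for full‑width QPI/UPI links, not a figure from this article:

```python
# Rough per-direction bandwidth of one QPI vs. one UPI link,
# assuming ~2 bytes transferred per cycle on a full-width link.

BYTES_PER_TRANSFER = 2  # assumption, not stated in the article

def link_gbs(gt_per_s: float) -> float:
    """Convert GT/s to GB/s per direction under the 2-byte assumption."""
    return gt_per_s * BYTES_PER_TRANSFER

qpi = link_gbs(9.6)     # 19.2 GB/s per direction
upi = link_gbs(10.4)    # 20.8 GB/s per direction
print(f"QPI {qpi:.1f} GB/s, UPI {upi:.1f} GB/s, gain {upi / qpi - 1:.1%}")
```

Under this assumption the move from QPI to UPI buys roughly an 8% per‑link bandwidth increase; the bigger win Intel cites is the lower power per transferred bit.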

Summary – The Intel Xeon Platinum 8260 exemplifies the modern server CPU naming scheme, generational progression, and architectural innovations such as the 2‑D Mesh interconnect for low‑latency memory access and the high‑speed UPI bus for multi‑socket scalability, allowing up to eight CPUs to be linked in a single server.

Tags: Server, Intel, CPU architecture, Mesh Interconnect, UPI, Xeon
Written by Refining Core Development Skills

Fei has over 10 years of development experience at Tencent and Sogou. Through this account, he shares his deep insights on performance.
