
Understanding Rasterization: From 3D Models to Screen Pixels

This article explains the rasterization pipeline of GPU‑based 3D rendering, covering model setup, orthographic and perspective projection, viewport transformation, triangle rasterization, bounding‑box optimization, depth buffering, and final pixel shading, using a simple two‑triangle example to illustrate each step.

ByteDance SYS Tech

Preface

GPU‑based rendering is now widely used in 3D animation and games. This article focuses on rasterization, walking through the rendering of a simple scene to help readers understand its underlying principles.

Rasterization Process

Rasterization performs two main functions: projecting geometric primitives (triangles or polygons) onto the screen and decomposing the projected primitives into fragments.

Model Setup

We use a simple model consisting of two triangles in 3D space. Triangle 1 vertices: (-2,0,-2), (2,0,-2), (0,2,-2). Triangle 2 vertices: (-1,0.5,-20), (2.5,1,-20), (3,-1.5,-20). The camera is placed at (0,0,0) looking toward the negative z‑axis.
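The example scene can be written down directly as data. This is a minimal sketch; the variable names are illustrative, not from any particular engine:

```python
# The two triangles and camera from the example scene.
# Vertices are (x, y, z) tuples in world space; the camera sits at
# the origin looking down the negative z-axis, so both triangles
# (at z = -2 and z = -20) are in front of it.
triangle1 = [(-2.0, 0.0, -2.0), (2.0, 0.0, -2.0), (0.0, 2.0, -2.0)]
triangle2 = [(-1.0, 0.5, -20.0), (2.5, 1.0, -20.0), (3.0, -1.5, -20.0)]
camera = (0.0, 0.0, 0.0)
```

Because the camera looks along the negative z‑axis, triangle 1 (z = -2) is the near triangle and triangle 2 (z = -20) is the far one.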

Orthographic and Perspective Projection

Perspective projection produces a near‑large, far‑small effect, while orthographic projection preserves sizes regardless of distance. In perspective projection, the view frustum is first transformed into the normalized cube [-1,1]^3, and the result is then projected onto the xy‑plane.
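The core of the near‑large, far‑small effect is the perspective divide: x and y are divided by the distance along the view direction. The sketch below isolates just that divide (a full pipeline would first apply the frustum‑to‑cube matrix mentioned above); the function name is illustrative:

```python
def perspective_divide(vertex):
    """Project a camera-space vertex onto the plane z = -1.

    The camera looks down the negative z-axis, so dividing x and y
    by -z shrinks distant geometry: the near-large, far-small effect.
    """
    x, y, z = vertex
    return (x / -z, y / -z)

# A near vertex (z = -2) keeps most of its size on screen,
# while a far vertex (z = -20) collapses toward the center:
near = perspective_divide((-2.0, 0.0, -2.0))   # (-1.0, 0.0)
far = perspective_divide((-1.0, 0.5, -20.0))   # (-0.05, 0.025)
```

With an orthographic projection the divide is skipped entirely, which is why both triangles would appear at their true size.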

Rasterization & Shading

After perspective projection and normalization, the standard cube is mapped to screen coordinates via viewport transformation. Rasterization determines which pixels lie inside each triangle using a bounding‑box and edge‑function (cross‑product) test. For pixels inside a triangle, shading is performed using interpolated texture coordinates and normals derived from barycentric coordinates computed before perspective division.
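The bounding‑box and edge‑function test described above can be sketched as follows. This is a minimal illustration assuming the triangle has already been mapped to screen coordinates by the viewport transformation; perspective‑correct attribute interpolation is omitted for brevity:

```python
def edge(a, b, p):
    """Edge function: the 2D cross product of (b - a) and (p - a).

    Its sign tells which side of the directed edge a->b the point p
    lies on; its magnitude is twice the area of triangle (a, b, p).
    """
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize(tri, width, height):
    """Yield (x, y) pixel coordinates covered by a screen-space triangle."""
    xs = [v[0] for v in tri]
    ys = [v[1] for v in tri]
    # Bounding-box optimization: only test pixels inside the box
    # enclosing the triangle, clipped to the screen.
    x0, x1 = max(0, int(min(xs))), min(width - 1, int(max(xs)))
    y0, y1 = max(0, int(min(ys))), min(height - 1, int(max(ys)))
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            w0 = edge(tri[1], tri[2], p)
            w1 = edge(tri[2], tri[0], p)
            w2 = edge(tri[0], tri[1], p)
            # All three edge functions sharing a sign means the
            # pixel center is inside the triangle.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                yield (x, y)

covered = set(rasterize([(1, 1), (6, 1), (1, 6)], 8, 8))
```

The same edge‑function values, normalized by the triangle's total area, give the barycentric coordinates used to interpolate texture coordinates and normals for shading.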

Depth Buffer

Depth buffering resolves visibility when multiple fragments map to the same pixel. The buffer stores, per pixel, the depth of the nearest fragment seen so far; when a new fragment is processed, its depth is compared with the stored value, and only if it is closer do its color and depth overwrite the stored ones.
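A minimal sketch of the depth test, assuming depths are stored as positive view‑space distances so that smaller means closer (the buffer layout and function name are illustrative):

```python
import math

def depth_test(depth_buffer, frame_buffer, x, y, z, color):
    """Write the fragment only if it is closer than the stored depth."""
    if z < depth_buffer[y][x]:
        depth_buffer[y][x] = z
        frame_buffer[y][x] = color

W, H = 4, 4
depth = [[math.inf] * W for _ in range(H)]  # initialized to "infinitely far"
frame = [[None] * W for _ in range(H)]

# Two triangles cover pixel (1, 1): the far one (distance 20) is
# drawn first, then the near one (distance 2) overwrites it.
depth_test(depth, frame, 1, 1, 20.0, "far")
depth_test(depth, frame, 1, 1, 2.0, "near")
# frame[1][1] is now "near"; drawing "far" again would be rejected.
```

Note that the outcome is the same regardless of the order in which the two fragments arrive, which is exactly why the depth buffer makes visibility order‑independent.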

Summary

The graphics pipeline consists of Vertex Processing (including perspective projection), Rasterization (projecting to the screen and determining covered pixels), Fragment Processing (shading using interpolated attributes), and Blending (writing the final color to the framebuffer).


Tags: GPU rendering, computer graphics, depth buffer, perspective projection, rasterization
Written by ByteDance SYS Tech

Focused on system technology, sharing cutting‑edge developments, innovation and practice, and analysis of industry tech hotspots.