
How Automatic Quantization Slashes Memory Use in High‑Resolution Physical Simulations

This article explains how researchers applied quantization to high‑resolution physical simulation, cutting memory use by more than 50% with no noticeable visual loss. The method models how quantization error propagates through the simulation, casts bit allocation as a constrained optimization problem, and replaces rounding with dithering; results are demonstrated on GPU‑based smoke, fluid, and elastic‑body simulations.


In Hollywood visual effects, high‑resolution physical simulation delivers stunning visuals but demands enormous memory: the 300‑meter dam scene in "Frozen" used 90 million particles and required about 96 GB of GPU memory.

To reduce storage, researchers have introduced quantization into simulation algorithms, representing physical variables with fewer bits. The SIGGRAPH 2021 paper "QuanTaichi" presented a compiler‑supported quantized type system that enables lower‑bit representations within a fixed GPU memory budget, but it required users to design the quantization scheme by hand.
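To make the idea concrete, here is a minimal sketch of low‑bit fixed‑point quantization, the kind of representation such a type system supports. The function names and the 16‑bit/10‑fraction‑bit choice are illustrative assumptions, not QuanTaichi's actual API:

```python
import numpy as np

def quantize_fixed(x, frac_bits, total_bits=16):
    """Encode floats as signed fixed-point integers with `frac_bits`
    fractional bits (illustrative sketch, not QuanTaichi's type system)."""
    scale = 1 << frac_bits
    lo = -(1 << (total_bits - 1))
    hi = (1 << (total_bits - 1)) - 1
    # Round to the nearest representable value and clamp to the signed range.
    return np.clip(np.round(x * scale), lo, hi).astype(np.int32)

def dequantize_fixed(q, frac_bits):
    """Recover an approximate float from the stored integer."""
    return q.astype(np.float64) / (1 << frac_bits)

x = np.array([0.125, -1.5, 3.14159])
q = quantize_fixed(x, frac_bits=10)        # stored in 16 bits instead of 32/64
x_hat = dequantize_fixed(q, frac_bits=10)
# Round-to-nearest error is at most half an LSB: 0.5 * 2**-10
```

Storing state in 16‑bit fixed point instead of 32‑bit floats halves memory at the cost of a bounded per‑value error, which is exactly the trade‑off the automatic method below reasons about.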

At SIGGRAPH 2022, a team from Zhejiang University, Kuaishou, and the University of Utah proposed an automatic quantization method for physical simulation, achieving over 50% memory savings without noticeable visual degradation, greatly improving usability and productivity.

The core idea is to model quantization‑induced precision loss as error propagation: drawing on uncertainty‑propagation theory, automatic quantization is formulated as a constrained optimization problem, and its analytic solution quickly yields a feasible quantization scheme.
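The flavor of that optimization can be sketched as follows. Under the standard uniform‑quantization model, a variable stored with f fractional bits carries error variance (2^-f)^2 / 12; weighting each variable by a sensitivity (how strongly its error propagates to the output) and requiring the total variance to stay under a budget gives a closed‑form bit allocation via Lagrange stationarity. This is my simplified illustration, not the paper's actual formulation; the function name and inputs are assumptions:

```python
import numpy as np

def allocate_fraction_bits(sensitivities, error_budget):
    """Toy analytic bit allocation: minimize total fraction bits subject to
    sum_i s_i^2 * (2**-f_i)**2 / 12 <= error_budget**2.
    At the optimum every variable contributes an equal share of the budget,
    giving a closed form for f_i (then rounded up to whole bits)."""
    s = np.asarray(sensitivities, dtype=np.float64)
    c = s**2 / 12.0                    # per-variable error-variance coefficient
    target = error_budget**2 / len(s)  # equal share of the variance budget
    f = 0.5 * np.log2(c / target)      # solves c_i * 4**-f_i == target
    return np.ceil(np.maximum(f, 0)).astype(int)

# Sensitive variables get more bits, insensitive ones get fewer.
bits = allocate_fraction_bits([10.0, 1.0, 0.1], error_budget=1e-3)
```

The appeal of an analytic solution is speed: no search over bit widths is needed, so the compiler can pick a scheme in one pass.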

The study also replaces traditional round‑to‑nearest with dithering, which decorrelates quantization errors across the data; at low bit counts, dithering noticeably reduces precision loss, as demonstrated in a simulation of aligned falling letters.
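The effect is easy to reproduce. With round‑to‑nearest, identical inputs (like the aligned letters) all acquire the same error, which accumulates coherently; adding random dither before truncation makes the error independent of the signal, so it averages out. A small sketch (function names are mine):

```python
import numpy as np

def round_nearest(x, frac_bits):
    """Deterministic round-to-nearest: identical inputs get identical errors."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

def round_dithered(x, frac_bits, rng):
    """Add uniform dither in [0, 1) LSB before flooring: the quantization
    error becomes signal-independent and zero-mean."""
    scale = 1 << frac_bits
    return np.floor(x * scale + rng.uniform(0.0, 1.0, size=x.shape)) / scale

rng = np.random.default_rng(0)
x = np.full(100_000, 0.30)   # many identical values, like aligned particles
mean_err_nearest = np.mean(round_nearest(x, 4) - x)   # constant bias
mean_err_dither = np.mean(round_dithered(x, 4, rng) - x)  # averages to ~0
```

Here every round‑to‑nearest result carries the same +0.0125 bias, while the dithered mean error is close to zero, which is why dithering preserves visual fidelity at very few bits.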

Results include a quantized smoke simulation on an Eulerian grid achieving 1.93× memory compression with over 228 million cells on a single NVIDIA RTX 3090; an elastic‑body simulation reaching 2.01× compression with over 295 million particles; and fluid simulations reaching up to 2.02× compression with 400 million particles.

Reference: Yuanming Hu, Jiafeng Liu, Xuanda Yang, Mingkuan Xu, Ye Kuang, Weiwei Xu, Qiang Dai, William T. Freeman, and Frédo Durand. 2021. QuanTaichi: A Compiler for Quantized Simulations. ACM Transactions on Graphics (SIGGRAPH 2021).

Tags: quantization, computer graphics, physical simulation, GPU memory optimization, SIGGRAPH
Written by Kuaishou Large Model, Official Kuaishou Account