Compression as a Measure of Intelligence in Large Language Models
The article argues that a large language model's ability to compress data through next‑token prediction reflects its intelligence, reviews theoretical and empirical evidence linking compression efficiency to model scale, and proposes a circuit‑competition framework to explain emergent capabilities, in‑context learning, and fine‑tuning effects.
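The link between next‑token prediction and compression follows from source coding: a model that assigns probability p to the observed next token can encode it in about −log₂ p bits with an arithmetic coder, so a sequence's total code length is the model's summed negative log‑likelihood. The sketch below illustrates this correspondence; the bigram predictor is a toy stand‑in for a language model, and all names (`bits_per_token`, `make_bigram_predictor`) are illustrative, not from the article.

```python
import math
from collections import defaultdict, Counter

def bits_per_token(text, predict_next):
    """Average code length a next-token predictor assigns to `text`.

    Via arithmetic coding, predicting the observed token with probability p
    costs -log2(p) bits, so total bits = -sum_t log2 p(x_t | x_<t).
    """
    total_bits = 0.0
    for t in range(1, len(text)):
        p = predict_next(text[:t], text[t])
        total_bits += -math.log2(p)
    return total_bits / (len(text) - 1)

def make_bigram_predictor(corpus, alphabet, alpha=1.0):
    """Toy stand-in for an LLM: an add-alpha smoothed bigram model."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    def predict_next(context, token):
        c = counts[context[-1]]  # condition only on the previous character
        return (c[token] + alpha) / (sum(c.values()) + alpha * len(alphabet))

    return predict_next

corpus = "abababababcbcbcbcb" * 50
alphabet = sorted(set(corpus))
predict = make_bigram_predictor(corpus, alphabet)

# A better predictor compresses the same data into fewer bits per symbol.
print(f"model code length: {bits_per_token(corpus, predict):.3f} bits/char")
print(f"uniform baseline : {math.log2(len(alphabet)):.3f} bits/char")
```

On this highly regular string, the bigram model's code length falls well below the uniform baseline of log₂|alphabet| bits per character, which is the sense in which better prediction is better compression.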