
Case Study: Resolving Deep Pagination Slow Queries in MySQL

This article walks through a real‑world incident caused by deep pagination in a MySQL‑based system, detailing the timeline of symptoms, root‑cause analysis, and a series of optimizations—including ID‑based queries, range scans, and rolling cursor techniques—that ultimately restored stable performance.


1. Incident Background

At 16:00 a colleague reported a drop in the availability of the pagination query in the Fusion system. Monitoring showed a sudden spike in TP99 latency and CPU usage on several machines, despite low overall CPU utilization.

2. Problem Timeline

16:05 – Interface UMP monitoring revealed abnormal TP99.

16:10 – Database CPU surged; a large number of slow SQL statements were captured.

16:15 – Logs indicated most slow SQLs targeted the pagination interface of a specific merchant (≈10 000 rows, ~10% of total).

16:25 – Deep pagination identified as the root cause; initial mitigation involved rate‑limiting and pausing MQ consumption.

17:40 – Optimized SQL deployed, but MQ backlog caused another CPU spike.

18:00 – Decision to develop a new cursor‑based pagination interface.

22:20 – New interface launched, MQ consumption resumed, and performance stabilized.

3. Cause Analysis and Solutions

3.1 Why Deep Pagination Fails

MySQL’s LIMIT offset, count actually reads offset + count rows and discards the first offset, so the larger the offset, the more rows the server scans and throws away on every request.

SELECT * FROM table WHERE org_code = xxxx LIMIT 1000, 100;

3.2 Solution Approaches

ID‑Based Query: first fetch the matching IDs via the secondary index, then query the full rows by those IDs.

Range Query on IDs: replace the IN list with a range predicate using the min and max of those IDs.

Subquery Join: join the paging ID subquery back to the main table (a "deferred join").

Rolling Cursor (Scroll Query): each response returns the maximum ID as a cursor; the next request filters with WHERE id > cursor LIMIT n.
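As rough sketches of the four approaches, reusing the example table and org_code filter from §3.1 (min_id, max_id, and #last_id# are placeholders supplied by the application, not literal SQL):

```sql
-- 1. ID-based query: two statements — collect the page's IDs via the
--    secondary index, then fetch the full rows by primary key.
--    (MySQL does not allow LIMIT directly inside an IN subquery.)
SELECT id FROM table WHERE org_code = xxxx LIMIT 1000, 100;
SELECT * FROM table WHERE id IN (/* IDs from the first statement */);

-- 2. Range query on IDs: replace the IN list with the min/max bounds.
SELECT * FROM table
WHERE org_code = xxxx AND id BETWEEN min_id AND max_id;

-- 3. Subquery join (deferred join): do the expensive paging over IDs only,
--    then join back for the full rows in one statement.
SELECT t.*
FROM table t
JOIN (SELECT id FROM table WHERE org_code = xxxx LIMIT 1000, 100) p
  ON t.id = p.id;

-- 4. Rolling cursor: the client passes back the largest ID it has seen.
SELECT * FROM table
WHERE org_code = xxxx AND id > #last_id#
ORDER BY id ASC
LIMIT 100;
```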

The team ultimately chose the rolling cursor method because it offers simple server‑side implementation and good performance, assuming IDs are auto‑increment and results are ordered by ID.

4. Final Optimized SQL

SELECT id, dj_org_code, dj_sku_id, jd_sku_id, yn FROM table WHERE org_code = xxxx AND id > 0 ORDER BY id ASC LIMIT 500;
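On each subsequent page, the caller feeds the largest id from the previous batch back into the predicate. For example, if the last row of the previous page had id 10500 (a hypothetical value):

```sql
SELECT id, dj_org_code, dj_sku_id, jd_sku_id, yn
FROM table
WHERE org_code = xxxx
  AND id > 10500   -- cursor = max id returned by the previous page
ORDER BY id ASC
LIMIT 500;
```

Because the id predicate is a range seek on the primary key, every page costs roughly the same regardless of how deep the client has paged.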

After deployment, the system stabilized, but a week later another wave of slow queries appeared. The cause: when LIMIT is small relative to the number of matching rows, MySQL's optimizer may prefer scanning the primary key (to satisfy ORDER BY id) over the secondary index on org_code; when matching rows are sparse, that scan walks a large portion of the table before collecting enough rows.

FORCE INDEX can override this choice, but hard‑coding index names introduces maintenance risk.

SELECT id, dj_org_code, dj_sku_id, jd_sku_id, yn FROM table FORCE INDEX(idx_upc) WHERE org_code = xxxx AND id > 0 ORDER BY id ASC LIMIT 500;
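An alternative that avoids hard‑coding an index name is to give the optimizer an index it will naturally prefer: a composite index covering both the filter column and the sort column (a sketch; the index name is hypothetical):

```sql
-- With (org_code, id), the equality on org_code plus ORDER BY id LIMIT n
-- can be satisfied by a single index range scan, so no FORCE INDEX hint
-- is needed and the plan no longer depends on LIMIT size.
ALTER TABLE table ADD INDEX idx_org_code_id (org_code, id);
```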

5. Summary and Lessons

Implement rate‑limiting and circuit‑breaker mechanisms to protect downstream services.

Do not ignore potential performance issues even if they appear minor under low load.

Thoroughly test SQL changes across different data distributions and edge cases.

Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
