
Optimizing MySQL Pagination with LIMIT: Methods, Experiments, and Index Strategies

This article examines the performance drawbacks of MySQL's LIMIT pagination on large tables, presents six practical query methods—including direct LIMIT, primary‑key indexing, index‑based ordering, prepared statements, covering indexes, and sub‑query/join techniques—provides extensive benchmark results, and offers concrete indexing recommendations to achieve fast, stable pagination even with millions of rows.


The article begins by highlighting the inefficiency of using plain SELECT * FROM table LIMIT M,N for large datasets, noting full‑table scans and unstable result ordering.

Method 1 uses the basic LIMIT syntax, suitable only for small tables (hundreds to thousands of rows), but suffers from slow performance on larger tables.
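As a sketch (table and column names are illustrative), fetching page 3 at 10 rows per page with plain LIMIT looks like this; MySQL still reads and discards every row before the offset:

```sql
-- Plain offset pagination: page 3, 10 rows per page.
-- MySQL scans and throws away the first 20 rows before returning 10,
-- so cost grows with the offset.
SELECT * FROM orders
ORDER BY id
LIMIT 20, 10;   -- offset = (page - 1) * page_size
```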

Method 2 creates a primary key or unique index and queries with SELECT * FROM table WHERE id_pk > (pageNum*10) LIMIT M, which leverages an index range scan and works well for tables with tens of thousands of rows.

Method 3 adds an ORDER BY on the indexed primary key: SELECT * FROM table WHERE id_pk > (pageNum*10) ORDER BY id_pk ASC LIMIT M, providing fast results on large tables while keeping the result order stable.
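A minimal sketch of this keyset pattern, assuming an AUTO_INCREMENT primary key id_pk and an illustrative table name:

```sql
-- Page 5 at 10 rows per page: seek past the last id of the previous page,
-- then read 10 rows in index order. No rows are scanned and discarded.
SELECT * FROM orders
WHERE id_pk > 40          -- (pageNum - 1) * 10
ORDER BY id_pk ASC
LIMIT 10;
```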

Method 4 employs prepared statements with placeholders for the page number and page size: PREPARE stmt_name FROM 'SELECT * FROM table WHERE id_pk > (? * ?) ORDER BY id_pk ASC LIMIT M', which is efficient for very large datasets because the statement is parsed once and reused across pages.
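A sketch of the full prepared-statement flow (statement and variable names are assumptions; note the SQL text must be a quoted string):

```sql
-- Prepare once, execute per page; the ? placeholders are bound at EXECUTE time.
PREPARE page_stmt FROM
  'SELECT * FROM orders WHERE id_pk > ? * ? ORDER BY id_pk ASC LIMIT 10';
SET @page = 5, @size = 10;
EXECUTE page_stmt USING @page, @size;   -- evaluates id_pk > 5 * 10
DEALLOCATE PREPARE page_stmt;
```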

Method 5 uses MySQL's ability to order by an indexed column and limit the result set, e.g., SELECT pk FROM your_table WHERE pk >= 1000 ORDER BY pk ASC LIMIT 0, 20; restricting the SELECT list to the indexed column enables an index-only (covering) scan and dramatically reduces query time.
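The covering-index variant as a sketch: because only the indexed column appears in the SELECT list, MySQL can answer the query from the index alone without touching table rows:

```sql
-- Index-only scan: pk is the indexed column, so the query never reads
-- the base table; it walks the index and returns 20 key values.
SELECT pk FROM your_table
WHERE pk >= 1000
ORDER BY pk ASC
LIMIT 0, 20;
```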

Method 6 combines sub‑queries or joins with indexes to locate the desired rows before fetching full records (here page and pagesize are placeholders that the application must substitute with literal values, since LIMIT does not accept expressions), such as:

SELECT * FROM your_table WHERE id <= (SELECT id FROM your_table ORDER BY id DESC LIMIT (page-1)*pagesize, 1) ORDER BY id DESC LIMIT pagesize

and

SELECT * FROM your_table AS t1 JOIN (SELECT id FROM your_table ORDER BY id DESC LIMIT (page-1)*pagesize, 1) AS t2 ON t1.id <= t2.id ORDER BY t1.id DESC LIMIT pagesize
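With concrete values substituted (page = 4 and pagesize = 20, so an offset of 60; these numbers are purely illustrative), the join form becomes:

```sql
-- The sub-query walks only the id index to find the page boundary;
-- the outer join then fetches the 20 full rows for that page.
SELECT t1.*
FROM your_table AS t1
JOIN (SELECT id FROM your_table ORDER BY id DESC LIMIT 60, 1) AS t2
  ON t1.id <= t2.id
ORDER BY t1.id DESC
LIMIT 20;
```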

Experimental results show that query time grows linearly with the offset in a plain LIMIT query (e.g., 0.016 s for offset 10, 0.094 s for offset 10 000, 3.229 s for offset 400 000, and 37.44 s for offset 866 613). Using covering indexes (selecting only the indexed column) reduces the same large‑offset query to about 0.2 s, a >100× speedup.

Further tests demonstrate that adding a WHERE clause on a non‑leading indexed column (e.g., WHERE vtype=1 ) can cause the optimizer to ignore the index, leading to slow queries; rearranging the composite index to place the filter column first restores index usage and brings execution time down to milliseconds.
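A sketch of the index rearrangement described above (the index name and any column other than vtype are assumptions):

```sql
-- A composite index ordered (id, vtype) does not help here: the leading
-- column does not match the WHERE clause, so the optimizer skips it.
-- Putting the filter column first lets the index serve both the filter
-- and the sort.
ALTER TABLE your_table ADD INDEX idx_vtype_id (vtype, id);

-- Now the filtered, paginated query can use the index end to end:
SELECT id FROM your_table
WHERE vtype = 1
ORDER BY id
LIMIT 90000, 10;
```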

The key takeaway is that for paginated queries with WHERE conditions, the index should be designed with the filter column first and the primary‑key column second, and the SELECT list should include only the indexed columns to enable index‑only scans.

By applying these strategies, pagination on tables with millions of rows can be performed in sub‑second times, making MySQL suitable for high‑traffic applications.

Tags: performance · Indexing · Query Optimization · MySQL · pagination · limit · large-data
Written by

Architecture Digest

Focusing on Java backend development, covering application architecture from top-tier internet companies (high availability, high performance, high stability), big data, machine learning, Java architecture, and other popular fields.
