
When Pagination Breaks: Real‑World Bugs and Practical Fixes

This article recounts three production‑level pagination bugs encountered in a payment system, explains why they occurred—ranging from naive batch processing to MySQL's nondeterministic LIMIT behavior—and shows how proper pagination, ThreadLocal cleanup, and deterministic ordering resolve the issues.

Sanyou's Java Diary

Hello, I'm Sanyou. Recently I ran into a puzzling production bug that turned out to stem not from the code itself but from a misunderstanding of the business requirements. It all started with a query like this:

select * from table order by priority limit 1;

Here, a smaller priority value means a higher priority. Ordering by priority and limiting the result to one row exposed a hidden issue in our order-processing task.

My First Production Bug

We had a scheduled job that fetched orders with status 0 (initial), called an external service to get the current status, and updated the order.

<code>// fetch all orders still in the initial state (status = 0)
List&lt;OrderInfo&gt; initOrderInfoList = queryInitOrderInfoList();
for (OrderInfo orderInfo : initOrderInfoList) {
    try {
        // ask the external service for the current status, then persist it
        String orderStatus = queryOrderStatus(orderInfo.getOrderId());
        updateOrderInfo(orderInfo.getOrderId(), orderStatus);
    } catch (Exception e) {
        // log the exception and continue with the next order
    }
}
</code>

The code works for small data sets, but when the number of pending orders grows, loading all rows at once can exhaust memory. I switched to batch fetching 100 rows at a time:

select * from `order` where order_status = 0 order by create_time limit 100;
<code>while (true) {
    // fetch the next 100 rows still in the initial state
    List&lt;OrderInfo&gt; initOrderInfoList = queryInitOrderInfoList();
    if (initOrderInfoList.isEmpty()) {
        break; // no pending orders left
    }
    for (OrderInfo orderInfo : initOrderInfoList) {
        try {
            String orderStatus = queryOrderStatus(orderInfo.getOrderId());
            updateOrderInfo(orderInfo.getOrderId(), orderStatus);
        } catch (Exception e) {
            // log the exception and continue with the next order
        }
    }
}
</code>

This introduced a potential infinite loop: if any row stays in the initial state (e.g., because the RPC call keeps failing), the list never comes back empty, so the while (true) never exits.
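A small, self-contained simulation makes the failure mode concrete. Everything here is illustrative, not the production code: the in-memory rows, the queryFirstPending stand-in, and the iteration cap that replaces the real (unbounded) while (true):

```java
import java.util.ArrayList;
import java.util.List;

public class StuckBatchLoop {
    // each row is [id, status]; status 0 = initial, 1 = done
    static final List<int[]> orders = new ArrayList<>();

    // stand-in for "select ... where order_status = 0 ... limit N"
    static List<int[]> queryFirstPending(int limit) {
        List<int[]> page = new ArrayList<>();
        for (int[] o : orders) {
            if (o[1] == 0) page.add(o);
            if (page.size() == limit) break;
        }
        return page;
    }

    // runs the batch loop with a safety cap; returns iterations used
    public static int run(int maxIterations) {
        orders.clear();
        orders.add(new int[]{1, 0}); // this row's RPC "always fails"
        orders.add(new int[]{2, 0});
        int iterations = 0;
        while (iterations < maxIterations) { // stand-in for while (true)
            List<int[]> batch = queryFirstPending(100);
            if (batch.isEmpty()) break;      // the intended exit
            for (int[] o : batch) {
                if (o[0] == 1) continue;     // simulated failure: status stays 0
                o[1] = 1;
            }
            iterations++;
        }
        return iterations;
    }

    public static void main(String[] args) {
        // the stuck row re-enters every batch, so only the cap stops the loop
        System.out.println(run(1000)); // 1000
    }
}
```

One row that never leaves status 0 is enough: it re-enters every batch, and the loop spins forever without the safety cap.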

To fix it, I replaced the batch loop with proper pagination using the MyBatis PageHelper plugin:

<code>// paginate until the last page
int pageNum = 0;
PageInfo&lt;OrderInfo&gt; pageInfo = null;
while (pageInfo == null || !pageInfo.isIsLastPage()) {
    pageNum++;
    PageHelper.startPage(pageNum, 100);
    List&lt;OrderInfo&gt; initOrderInfoList = queryInitOrderInfoList();
    pageInfo = new PageInfo&lt;&gt;(initOrderInfoList);
    for (OrderInfo orderInfo : initOrderInfoList) {
        try {
            String orderStatus = queryOrderStatus(orderInfo.getOrderId());
            updateOrderInfo(orderInfo.getOrderId(), orderStatus);
        } catch (Exception e) {
            // log the exception and continue with the next order
        }
    }
}
</code>

However, if rows leave the result set between pages (here, orders updated out of status 0 no longer match the query), the total shrinks while the offset keeps advancing, so the next page's offset skips over rows that were never processed.

My workaround was to paginate backwards: compute the last page, process it first, and walk towards page one. Rows that drop out of the result set then only shift pages that have already been visited, so nothing is missed even as statuses change.
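The difference between the two directions can be simulated with an in-memory pending set. The page arithmetic and the [id, status] rows below are illustrative stand-ins for the real table and query:

```java
import java.util.ArrayList;
import java.util.List;

public class PagingDirection {
    // each row is [id, status]; status 0 = pending, 1 = processed
    static final List<int[]> orders = new ArrayList<>();

    // page over the *current* pending set, as OFFSET/LIMIT would
    static List<int[]> queryPendingPage(int pageNum, int pageSize) {
        List<int[]> pending = new ArrayList<>();
        for (int[] o : orders) if (o[1] == 0) pending.add(o);
        int from = (pageNum - 1) * pageSize;
        int to = Math.min(from + pageSize, pending.size());
        return from >= to ? new ArrayList<>() : new ArrayList<>(pending.subList(from, to));
    }

    // processes every page in the given direction; returns rows handled
    public static int process(int total, int pageSize, boolean backward) {
        orders.clear();
        for (int i = 1; i <= total; i++) orders.add(new int[]{i, 0});
        int lastPage = (total + pageSize - 1) / pageSize;
        int processed = 0;
        int page = backward ? lastPage : 1;
        while (page >= 1 && page <= lastPage) {
            for (int[] o : queryPendingPage(page, pageSize)) {
                o[1] = 1;          // the row leaves the pending set
                processed++;
            }
            page += backward ? -1 : 1;
        }
        return processed;
    }

    public static void main(String[] args) {
        // forward paging shrinks the result set under its own feet and skips rows
        System.out.println(process(250, 100, false)); // 150
        // backward paging only shifts pages that were already visited
        System.out.println(process(250, 100, true));  // 250
    }
}
```

With 250 pending rows and a page size of 100, forward pagination silently skips 100 rows, while the backward walk handles all 250 exactly once.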

Another Pagination Pitfall with PageHelper

Initially I wrote:

<code>PageHelper.startPage(pageNum, 100);
List&lt;OrderInfo&gt; list = orderInfoMapper.select(param1);
</code>

Later I added a null‑check around the query:

<code>PageHelper.startPage(pageNum, 100);
if (param != null) {
    List&lt;OrderInfo&gt; list = orderInfoMapper.select(param);
}
</code>

If param was null, the query never ran, so the pagination parameters set by startPage stayed in PageHelper's ThreadLocal and leaked into the next query on the same thread, paginating it unexpectedly.

The correct pattern is to call PageHelper.startPage immediately before the MyBatis query it is meant to affect, inside the same conditional block, so the query is guaranteed to consume and clear the thread-local state.
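The leak is easy to reproduce with a plain ThreadLocal. In this toy sketch, startPage and query are stand-ins I made up to mimic the consume-and-clear behavior, not PageHelper's real internals:

```java
public class ThreadLocalLeak {
    // stand-in for PageHelper's internal thread-local page parameters
    static final ThreadLocal<int[]> PAGE = new ThreadLocal<>();

    public static void startPage(int pageNum, int pageSize) {
        PAGE.set(new int[]{pageNum, pageSize});
    }

    // a query consumes and clears the thread-local, like a paginated mapper call
    public static String query() {
        int[] p = PAGE.get();
        PAGE.remove();
        return p == null ? "unpaged" : "page " + p[0] + " size " + p[1];
    }

    public static void main(String[] args) {
        startPage(2, 100);
        // the guarded query is skipped, so the parameters stay behind ...
        boolean paramPresent = false;
        if (paramPresent) query();
        // ... and the next, unrelated query is unexpectedly paginated
        System.out.println(query()); // page 2 size 100
    }
}
```

With the real library, PageHelper also offers a clearPage() method; calling it defensively (e.g., in a finally block) protects you when the guarded query might not run.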

Data Loss Due to Nondeterministic ORDER BY

In another case, adjacent pages showed duplicate (and missing) rows because many orders shared the same create_time. MySQL's LIMIT optimization combined with a non-unique ORDER BY column makes the returned row order nondeterministic:

select * from table order by create_time desc limit 0,10;

When multiple rows share the same create_time, MySQL may return them in any order, and may even pick a different order on each execution of the same query, so a row can appear on two pages while another appears on none.

The official MySQL documentation confirms this behavior. To obtain a deterministic order, add a unique column (e.g., id ) to the ORDER BY clause:

select * from table order by create_time desc, id asc limit 0,10;

This guarantees consistent pagination without missing or duplicated rows.
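The same tie-breaking idea can be shown in plain Java: sorting by create_time alone leaves ties unspecified, while adding the unique id as a secondary key pins the order. The rows and the sortedIds helper below are illustrative:

```java
import java.util.Arrays;
import java.util.Comparator;

public class DeterministicOrder {
    // rows are [id, createTime]; sort by createTime DESC, then id ASC,
    // mirroring ORDER BY create_time DESC, id ASC
    public static long[] sortedIds(long[][] rows) {
        return Arrays.stream(rows)
                .sorted(Comparator.<long[]>comparingLong(r -> r[1]).reversed()
                        .thenComparingLong(r -> r[0]))
                .mapToLong(r -> r[0])
                .toArray();
    }

    public static void main(String[] args) {
        // three orders created at the same instant: createTime alone cannot
        // order them, but the unique id breaks the tie deterministically
        long[][] rows = {{3, 100}, {1, 100}, {2, 100}};
        System.out.println(Arrays.toString(sortedIds(rows))); // [1, 2, 3]
    }
}
```

Because the (create_time, id) pair is unique, every execution produces the same total order, which is exactly what OFFSET/LIMIT pagination needs.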

In summary, pagination bugs often stem from:

Assuming static data while the underlying set changes.

Improper use of pagination helpers that leave thread‑local state.

Relying on non‑unique ordering columns.

Solutions include using reliable pagination libraries correctly, adding deterministic ordering columns, and considering backward pagination when data mutates between pages.

That’s all for this article.
