
Understanding RocketMQ Consumption Logic in Version 4.9.x

This article provides a comprehensive walkthrough of RocketMQ 4.9.x consumption architecture, covering the four core roles, publish‑subscribe model, storage structures, load‑balancing, long‑polling, concurrent and ordered consumption flows, progress persistence, and retry mechanisms, with illustrative diagrams and code snippets.

Sohu Tech Products

1 Architecture Overview

RocketMQ 4.9.x is built around four roles: NameServer (a stateless routing registry), BrokerServer (stores, delivers and queries messages), Producer (publishes messages using load‑balancing), and Consumer (pull or push consumption).

The cluster workflow proceeds in order: NameServer startup, Broker registration, Topic creation, Producer message sending, and Consumer message pulling.

2 Publish‑Subscribe Model

RocketMQ uses a publish‑subscribe model with two consumption modes: Clustering (each message is consumed by only one consumer in a group) and Broadcasting (every consumer receives every message).
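The difference between the two modes can be illustrated with a toy dispatcher (this is not the RocketMQ API; `ModeDemo` is a hypothetical helper): clustering delivers each message to exactly one member of the group, while broadcasting fans it out to every member.

```java
import java.util.*;

// Toy dispatcher illustrating the two consumption modes (not RocketMQ API).
class ModeDemo {
    // Clustering: exactly one consumer in the group receives the message
    // (round-robin stands in for the real queue assignment here).
    static List<String> clustering(List<String> consumers, int msgIndex) {
        return List.of(consumers.get(msgIndex % consumers.size()));
    }

    // Broadcasting: every consumer in the group receives the message.
    static List<String> broadcasting(List<String> consumers, int msgIndex) {
        return new ArrayList<>(consumers);
    }
}
```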

3 Storage Model

RocketMQ stores messages in a hybrid structure. All queues of a broker share a single commitlog file where messages are sequentially written. Each commitlog file is 1 GB by default and named by its start offset (e.g., 00000000000000000000).

Background threads asynchronously build consumequeue (consumption files) and indexfile (index files). Each consumequeue stores entries of 20 bytes, enabling fast offset‑based lookup.
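Both conventions can be sketched in a few lines, assuming the widely documented 20-byte entry layout of commitlog offset (8 bytes) + message size (4 bytes) + tag hashcode (8 bytes); `StoreFormat` is an illustrative helper, not RocketMQ code:

```java
import java.nio.ByteBuffer;

// Sketch of the storage conventions described above (illustrative only).
class StoreFormat {
    // A commitlog file is named by the 20-digit, zero-padded offset
    // of its first byte, e.g. 00000000000000000000.
    static String commitlogFileName(long startOffset) {
        return String.format("%020d", startOffset);
    }

    // A consumequeue entry is 20 bytes: commitlog offset (8) +
    // message size (4) + tag hashcode (8). The fixed width is what
    // makes offset-based lookup a simple multiplication.
    static byte[] consumeQueueEntry(long commitlogOffset, int size, long tagsCode) {
        return ByteBuffer.allocate(20)
                .putLong(commitlogOffset)
                .putInt(size)
                .putLong(tagsCode)
                .array();
    }
}
```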

The progress of each consumer group is recorded in consumerOffset.json, mapping topic@group keys to the logical offset of each queue.
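An illustrative fragment of such a file (topic, group, and offset values are made up) might look like:

```json
{
  "offsetTable": {
    "TopicA@group_demo": {
      "0": 250, "1": 251, "2": 249, "3": 250
    }
  }
}
```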

4 Consumption Process

In cluster consumption, the flow is:

1. Producer sends a message to a Broker.

2. Broker persists it to the commitlog; an async thread builds the consumequeue.

3. Consumer starts, obtains the queue list from NameServer, and pulls messages based on its stored offset.

4. Broker returns the messages; the client stores them in a processQueue (a red‑black tree).

5. The consumer thread executes the user‑defined listener, acknowledges success, and updates the local offset.
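The tail of this flow (tree-backed buffering plus offset commit) can be sketched as follows. `MiniProcessQueue` is a simplification that assumes the commit offset is the smallest offset still in flight, which is how the client avoids losing messages that were pulled but not yet acknowledged:

```java
import java.util.TreeMap;

// Simplified processQueue: messages keyed by queue offset in a red-black
// tree (TreeMap). After an ack removes an entry, the offset reported back
// is the smallest offset still being processed, so a crash never skips
// an in-flight message (it may redeliver, hence the need for idempotency).
class MiniProcessQueue {
    private final TreeMap<Long, String> tree = new TreeMap<>();

    void put(long offset, String body) { tree.put(offset, body); }

    // Ack one message; returns the offset that would be committed next.
    long ack(long offset) {
        tree.remove(offset);
        return tree.isEmpty() ? offset + 1 : tree.firstKey();
    }
}
```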

4.1 Load Balancing

Clients perform load balancing when they start, every 20 seconds, or when a broker notifies a change. The broker provides the list of queues for a topic and the list of consumer IDs in the group. An average‑allocation algorithm (similar to pagination) assigns queues to consumers.

During balancing, the client creates PullRequest objects for newly assigned queues and stores them in pullRequestQueue. The processQueue holds the messages to be consumed.
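The pagination-like allocation can be sketched as below, in the spirit of the client's AllocateMessageQueueAveragely strategy; `AvgAllocate` is illustrative, not the actual class:

```java
import java.util.ArrayList;
import java.util.List;

// Average allocation sketch: queues are dealt out in contiguous ranges,
// like paginating a list. Consumers earlier in the sorted ID list absorb
// the remainder when the counts do not divide evenly.
class AvgAllocate {
    static List<Integer> allocate(int queueCount, List<String> cids, String self) {
        int index = cids.indexOf(self);
        int mod = queueCount % cids.size();
        int average = queueCount <= cids.size() ? 1
                : (mod > 0 && index < mod ? queueCount / cids.size() + 1
                                          : queueCount / cids.size());
        int start = (mod > 0 && index < mod) ? index * average
                                             : index * average + mod;
        List<Integer> result = new ArrayList<>();
        for (int i = start; i < Math.min(start + average, queueCount); i++) {
            result.add(i); // queue indices assigned to this consumer
        }
        return result;
    }
}
```

For example, 8 queues across 3 consumers yields ranges of 3, 3, and 2 queues.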

4.2 Long Polling

Consumers use long polling to avoid excessive pull requests. If no new messages are available, the broker places the request into pullRequestTable. A background service checks every 5 seconds; when new messages arrive, it wakes the request.
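A minimal sketch of the suspend-and-wake idea, using a blocking queue in place of the broker's pullRequestTable; `LongPollDemo` is illustrative only:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal long-polling sketch: a pull that finds no data parks until new
// data arrives or the hold timeout expires, instead of busy-looping.
class LongPollDemo {
    private final BlockingQueue<String> messages = new LinkedBlockingQueue<>();

    // "Broker" side: a new message arrives and wakes any parked pull.
    void put(String msg) { messages.offer(msg); }

    // "Consumer" side: return a message, or null after holdMillis with no data.
    String pull(long holdMillis) {
        try {
            return messages.poll(holdMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }
}
```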

5 Concurrent Consumption

The concurrent consumer creates a thread pool, a task to clean expired messages, and a task to handle failed messages. Pulled messages are batched (e.g., 10 messages) into ConsumeRequest objects and submitted to the thread pool.

Each thread invokes the user listener; on success the offset is updated, on failure the message is sent to a retry queue.
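The batching step can be sketched as a simple partition of the pull result; `BatchSplit` is illustrative, with the real client's batch size governed by the consumeMessageBatchMaxSize setting:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split a pull result into fixed-size batches, each of which would
// become one ConsumeRequest submitted to the consume thread pool.
class BatchSplit {
    static <T> List<List<T>> split(List<T> msgs, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < msgs.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    msgs.subList(i, Math.min(i + batchSize, msgs.size()))));
        }
        return batches;
    }
}
```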

6 Ordered Consumption

Ordered messages guarantee FIFO per queue (partition‑ordered) or across the whole topic (global‑ordered). The producer selects a queue using a MessageQueueSelector based on a sharding key.

// Partition-ordered send: a MessageQueueSelector routes every message
// with the same sharding key (here an order ID) to the same queue.
Message msg = new Message("TopicA", "Tag", ("order " + orderId).getBytes());
SendResult result = producer.send(msg, new MessageQueueSelector() {
    @Override
    public MessageQueue select(List<MessageQueue> mqs, Message m, Object arg) {
        long key = (Long) arg;                    // sharding key
        return mqs.get((int) (key % mqs.size())); // same key -> same queue
    }
}, orderId);

Consumers for ordered messages acquire a lock on the queue before pulling, ensuring a single thread processes each queue.
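The per-queue exclusivity can be sketched with one lock object per queue, in the spirit of the client's internal MessageQueueLock; `QueueLocks` is an illustrative reduction:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of per-queue serialization for ordered consumption: every queue
// maps to exactly one lock object, so at most one thread at a time runs
// the listener for messages of that queue, preserving FIFO order.
class QueueLocks {
    private static final Map<Integer, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    static ReentrantLock lockFor(int queueId) {
        return LOCKS.computeIfAbsent(queueId, id -> new ReentrantLock());
    }
}
```

A consume task would call `lockFor(queueId).lock()` before invoking the listener and unlock afterwards.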

7 Progress Persistence

In cluster mode, progress is reported to the broker every 5 seconds via ConsumerManager and persisted to consumerOffset.json. In broadcast mode, progress is stored locally in offsets.json (format MessageQueue:Offset).

8 Retry Mechanism

When consumption fails, the client sends a CONSUMER_SEND_MSG_BACK request. The broker places the message into a retry topic named %RETRY%<group> with an increasing delay level (first retry uses level 3, i.e., 10 seconds). After 16 retries the message is moved to a dead‑letter queue %DLQ%<group>.

Retry #    Interval
1          10 s
2          30 s
3          1 min
4          2 min
5          3 min
6          4 min
7          5 min
8          6 min
9          7 min
10         8 min
11         9 min
12         10 min
13         20 min
14         30 min
15         1 h
16         2 h
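Assuming the broker's default messageDelayLevel table ("1s 5s 10s 30s 1m 2m ... 20m 30m 1h 2h"), the mapping from retry attempt to interval, along with the retry/DLQ topic naming, can be sketched as; `RetryDelay` is an illustrative helper:

```java
// Sketch of the retry back-off: retries use delay levels 3..18 of the
// broker's default messageDelayLevel table, so retry n lands on level n + 2.
class RetryDelay {
    static final String[] DELAY_LEVELS = {
        "1s", "5s", "10s", "30s", "1m", "2m", "3m", "4m", "5m", "6m",
        "7m", "8m", "9m", "10m", "20m", "30m", "1h", "2h"
    };

    // retry is 1-based; first retry uses level 3 ("10s").
    static String intervalForRetry(int retry) {
        return DELAY_LEVELS[retry + 1]; // level = retry + 2, array is 0-based
    }

    static String retryTopic(String group) { return "%RETRY%" + group; }

    static String dlqTopic(String group) { return "%DLQ%" + group; }
}
```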

9 Summary

RocketMQ 4.x consumption involves heavy client‑side logic: load balancing, pull requests, long polling, offset management, and retry handling. This makes multi‑language client development complex and emphasizes the need for idempotent business logic because consumer restarts or broker failures can cause duplicate deliveries.

