RocketMQ 5.0 Architecture: Proxy Layer, POP Consumption, and Controller Mode
This article explains the architectural changes introduced in RocketMQ 5.0, including the new stateless Proxy layer, the POP consumption model, message‑level load balancing, and the Controller mode for automatic master‑slave failover, providing detailed diagrams and resource links.
To improve cloud‑native resource utilization and elasticity, RocketMQ 5.0 undergoes a major architectural adjustment.
Proxy Layer
RocketMQ architecture before 5.0:
Before 5.0, RocketMQ used a custom Remoting protocol built on Netty for network communication, and computation and storage were coupled inside the Broker. Producers and consumers retrieve routing information from the NameServer and then interact directly with the Broker to produce and consume messages.
RocketMQ 5.0 new architecture:
Version 5.0 introduces a stateless Proxy layer that extracts protocol adaptation, permission management, and consumption management from the Broker. The Broker now focuses solely on data storage, while the Proxy handles client-facing logic, enabling better adaptation to cloud-native environments and resource-elastic scheduling. The Proxy layer also adds support for gRPC, Google's high-performance RPC framework built on Protobuf serialization.
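The compute/storage split above can be modeled in a few lines. This is a conceptual sketch only, not RocketMQ's actual code: the class names, the topic-hash routing, and the allow-list permission check are all illustrative assumptions. The point is that the Proxy holds no message state, so any number of Proxy instances can be scaled up or down independently of the Brokers.

```python
class Broker:
    """Storage-only role: just appends and reads messages (sketch)."""
    def __init__(self):
        self.log = []

    def append(self, msg):
        self.log.append(msg)
        return len(self.log) - 1  # offset of the stored message


class Proxy:
    """Stateless compute role: protocol adaptation + permission checks (sketch)."""
    def __init__(self, brokers, allowed_topics):
        self.brokers = brokers          # routing table, e.g. fetched from a NameServer
        self.allowed = allowed_topics   # stand-in for real ACL management

    def produce(self, topic, body):
        if topic not in self.allowed:
            raise PermissionError(f"topic {topic!r} not permitted")
        # Pick a broker by topic; real routing uses queue metadata, not a hash.
        broker = self.brokers[hash(topic) % len(self.brokers)]
        return broker.append((topic, body))


brokers = [Broker(), Broker()]
proxy = Proxy(brokers, allowed_topics={"orders"})
offset = proxy.produce("orders", b"item-42")  # accepted and stored at offset 0
```

Because the Proxy keeps no per-message state, losing or adding a Proxy instance never loses data; only the Brokers persist anything.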
POP Consumption Mode
Before 5.0, RocketMQ supported two ways for consumers to obtain messages from the Broker: Pull and Push.
Pull mode: consumers continuously poll the Broker for messages; if none are available, the request is held (long polling) until new messages arrive. Push mode: consumers register a listener, and the Broker delivers messages to it via a callback, although the underlying mechanism is still the consumer pulling data from the Broker.
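The pull loop that underlies both modes can be sketched as follows. This is a simplified model, not the real client: the `BrokerQueue` class and its `pull` method are illustrative, and the `break` stands in for the long-polling wait a real consumer would perform when the queue is empty.

```python
class BrokerQueue:
    """Sketch of one Broker-side queue that consumers pull from."""
    def __init__(self):
        self.messages = []

    def pull(self, offset, max_batch=32):
        """Return up to max_batch messages starting at offset.

        An empty result means the caller should long-poll: the real Broker
        parks the request until new messages arrive, then responds.
        """
        return self.messages[offset:offset + max_batch]


q = BrokerQueue()
q.messages.extend(["m1", "m2", "m3"])

offset = 0
received = []
while True:
    batch = q.pull(offset)
    if not batch:
        break  # a real client would block here until the Broker has more data
    received.extend(batch)
    offset += len(batch)  # consumer tracks its own offset in pull mode
```

Push mode wraps exactly this loop inside the client and invokes the registered listener for each message, which is why the Broker never truly "pushes".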
In 5.0, RocketMQ adds a POP mode:
The client sends a Pop request to the Broker, which retrieves a message in POP style and returns it; the message then stays invisible to other consumers for an invisibility period. After successful consumption, the client sends an ACK to confirm. If no ACK arrives before the invisibility period expires, the Broker makes the message available for redelivery.
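The pop/invisible/ACK cycle can be sketched with a toy queue. This models the semantics only; the data structures and the `invisible_seconds` parameter are illustrative assumptions, not RocketMQ internals.

```python
import time


class PopQueue:
    """Sketch of POP semantics: a popped message becomes invisible for a
    while; if it is not ACKed before the deadline, it can be popped again."""

    def __init__(self):
        self.messages = {}        # receipt handle -> message body
        self.invisible_until = {} # receipt handle -> redelivery deadline
        self._next_handle = 0

    def put(self, body):
        self.messages[self._next_handle] = body
        self.invisible_until[self._next_handle] = 0.0
        self._next_handle += 1

    def pop(self, invisible_seconds=30.0, now=None):
        now = time.time() if now is None else now
        for handle, body in self.messages.items():
            if self.invisible_until[handle] <= now:
                # Hide the message from other poppers until the deadline.
                self.invisible_until[handle] = now + invisible_seconds
                return handle, body
        return None  # nothing currently visible

    def ack(self, handle):
        # Successful consumption: remove the message for good.
        self.messages.pop(handle, None)
        self.invisible_until.pop(handle, None)


q = PopQueue()
q.put(b"hello")
handle, body = q.pop(invisible_seconds=30.0, now=0.0)
assert q.pop(now=1.0) is None    # invisible while being processed
q.ack(handle)                    # consumed successfully
assert q.pop(now=100.0) is None  # gone after ACK, never redelivered
```

If the ACK were skipped, a later `pop(now=100.0)` would return the same message again, which is how POP tolerates consumer crashes without losing data.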
Message-level load balancing is also introduced: instead of the pre-5.0 queue-level rebalance, where each queue is bound to at most one consumer in a group, multiple consumers within the same consumer group can now evenly share messages from a single queue.
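The difference between the two balancing granularities can be shown with a toy assignment. This is a sketch under stated assumptions: the round-robin distribution below is only an idealization of how POP spreads messages, since in practice whichever consumer pops next gets the next visible message.

```python
import itertools

messages = [f"msg-{i}" for i in range(6)]   # one queue's backlog
consumers = ["c1", "c2", "c3"]              # one consumer group

# Queue-level rebalance (pre-5.0): a queue belongs to at most one consumer
# in the group, so with a single queue only one consumer does any work.
queue_level = {"c1": list(messages), "c2": [], "c3": []}

# Message-level balancing (5.0 POP): each pop hands the next visible
# message to whichever consumer asks, so the whole group shares one queue.
message_level = {c: [] for c in consumers}
for msg, consumer in zip(messages, itertools.cycle(consumers)):
    message_level[consumer].append(msg)
```

With a single hot queue, queue-level assignment caps throughput at one consumer, while message-level balancing lets the whole group drain it in parallel.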
Controller Mode
Prior to 5.0, RocketMQ offered Master‑Slave and DLedger deployment modes. Version 5.0 adds a Controller mode that provides automatic master failover without requiring DLedger.
The Controller extracts master-election logic from the Broker and can be deployed independently or embedded in the NameServer. It leverages RocketMQ's native storage replication to achieve automatic switchover.
Deployment diagrams (images omitted).
References:
https://rocketmq.apache.org/version/
https://developer.aliyun.com/article/801815
https://rocketmq.apache.org/zh/docs/deploymentOperations/03autofailover/