
Comparison of Mainstream Message Queue Products and Their Typical Use Cases

This article examines the core features, performance characteristics, and typical application scenarios of popular message‑queue middleware such as ZeroMQ, RabbitMQ, ActiveMQ, Redis, and Kafka. It also discusses when to adopt a message queue, the benefits one brings (decoupling, eventual consistency, broadcasting, and flow control), and closes with best‑practice guidelines.


Message queues have gradually become a core means of internal communication in enterprise IT systems, offering decoupling, reliable delivery, broadcasting, flow control, and eventual consistency; they are now a primary mechanism for asynchronous RPC.

There are many mainstream message‑queue products on the market today: the veteran ActiveMQ and RabbitMQ, the currently popular Kafka, and Alibaba's in‑house Notify, MetaQ, and RocketMQ. This article compares the mainstream MQs, their features, and their typical usage scenarios.

01 – Current Mainstream MQ Products

1. ZeroMQ

Often claimed to be the fastest message queue system, ZeroMQ is especially suited to high‑throughput scenarios. It is highly extensible and flexible, but it is implemented (in C++) as a socket‑library wrapper, so using it as a full message queue requires substantial development effort. ZeroMQ provides only non‑persistent queues, so messages are lost if the process goes down. Twitter's Storm used ZeroMQ for data‑flow transport in versions prior to 0.9 (later versions switched to Netty).

2. RabbitMQ

RabbitMQ leverages Erlang's concurrency strengths and supports many protocols (AMQP, XMPP, SMTP, STOMP), which makes it fairly heavyweight and well suited to enterprise development. Performance is good, but the Erlang codebase makes secondary development and in‑house maintenance less approachable.

3. ActiveMQ

An older Apache open‑source project that is widely deployed. It implements the JMS 1.1 specification, integrates easily with Spring‑JMS, and supports multiple protocols, but it is not lightweight and copes poorly with very large numbers of queues.

4. Redis

As an in‑memory key‑value store, Redis also offers a publish/subscribe service that can be used as an MQ, though real‑world use in that role is limited and scaling it out is not straightforward. One benchmark compared enqueue and dequeue operations in RabbitMQ and Redis over 100,000 iterations, recording the elapsed time every 10,000 operations.

Experimental results indicate that for small payloads Redis outperforms RabbitMQ on enqueue, but for payloads larger than 10 KB Redis becomes unbearably slow; on dequeue, Redis consistently shows excellent performance, while RabbitMQ’s dequeue performance is far lower.
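As a rough sketch of that benchmark methodology (the function name, the 512‑byte payload, and the in‑process `collections.deque` stand‑in are all illustrative; the real test would push to Redis or publish over AMQP to RabbitMQ), the timing harness might look like this:

```python
import time
from collections import deque

def benchmark_enqueue(payload: bytes, total: int = 100_000, checkpoint: int = 10_000):
    """Enqueue `total` payloads, recording elapsed time every `checkpoint` ops."""
    q = deque()  # stand-in for the broker; swap in a Redis LPUSH or AMQP publish here
    timings = []
    start = time.perf_counter()
    for i in range(1, total + 1):
        q.append(payload)  # the operation being measured
        if i % checkpoint == 0:
            timings.append((i, time.perf_counter() - start))
    return q, timings

q, timings = benchmark_enqueue(b"x" * 512)  # one arbitrary small payload size
print(len(timings))  # → 10
```

Running the same loop against live Redis and RabbitMQ instances, with payload sizes ranging from a few hundred bytes up past 10 KB, reproduces the comparison described above.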

5. Kafka / Jafka

Kafka, an Apache project originally developed at LinkedIn, is a high‑performance, cross‑language distributed publish/subscribe messaging system; Jafka is a derivative incubated from Kafka.

Key features of Kafka include:

Fast persistence: messages are appended to an on‑disk log, so persistence costs O(1) per message.

High throughput (on the order of 100,000 messages per second on a single server) with a natively distributed architecture and automatic load balancing.

Support for Hadoop parallel data loading, suitable for real‑time processing of log data and offline analysis.

Compared with ActiveMQ, Kafka is lightweight, high‑performance, and works well as a distributed system.

When to Use a Message Queue

Before adopting a message queue, consider whether it is truly necessary.

Common scenarios include:

Business decoupling.

Eventual consistency.

Broadcasting.

Peak‑shaving and flow control.

If strong consistency and immediate result handling are required, RPC may be more appropriate.

02 – Message Queue Usage Scenarios

1. Decoupling

Decoupling is the fundamental problem a message queue solves. It allows a transaction to focus on core logic while non‑essential tasks (e.g., sending SMS after an order is paid) are handled asynchronously, preventing slow downstream services from delaying the main flow.
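A minimal sketch of this pattern with Python's standard library, using `queue.Queue` as a stand‑in for the broker (the function names and the in‑memory `sent` list are illustrative):

```python
import queue
import threading

mq = queue.Queue()  # stands in for the message broker
sent = []           # records delivered notifications, for demonstration

def sms_worker():
    """Consumer: drains the queue and 'sends' SMS notifications."""
    while True:
        order_id = mq.get()
        if order_id is None:  # shutdown sentinel
            break
        sent.append(f"SMS for order {order_id}")  # real code would call an SMS gateway
        mq.task_done()

worker = threading.Thread(target=sms_worker, daemon=True)
worker.start()

def pay_order(order_id: int) -> str:
    """Core flow: commit the payment, enqueue the notification, return at once."""
    mq.put(order_id)  # fire-and-forget; a slow SMS gateway cannot delay us
    return "paid"

print(pay_order(42))  # → paid
mq.put(None)
worker.join()
print(sent)  # → ['SMS for order 42']
```

The payment call returns immediately; if the SMS gateway is slow or down, messages simply wait in the queue instead of stalling the main order flow.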

2. Eventual Consistency

Eventual consistency means the two systems eventually reach the same state: either both operations succeed or both fail. Some MQs (e.g., Alibaba's Notify and QMQ) are designed for high‑reliability notifications in transaction systems.

Achieving eventual consistency typically involves recording actions and compensating later, using retry mechanisms until success, rather than relying on heavyweight distributed transactions.
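One common shape for this is a local "message table" plus a retry loop, sketched below in standard‑library Python (the `flaky_deliver` stub and all names are illustrative stand‑ins for a real downstream call):

```python
import random

random.seed(1)  # deterministic failures, for the demo only
outbox = []     # local "message table": intents recorded with the business transaction

def place_order(order_id: int) -> None:
    # ...commit the local transaction, then record the pending notification...
    outbox.append({"order_id": order_id, "done": False})

def flaky_deliver(msg: dict) -> bool:
    """Stand-in for notifying the downstream system; fails some of the time."""
    return random.random() > 0.7

def compensate(max_rounds: int = 50) -> bool:
    """Retry every undelivered record until it succeeds: eventual consistency."""
    for _ in range(max_rounds):
        pending = [m for m in outbox if not m["done"]]
        if not pending:
            return True
        for m in pending:
            if flaky_deliver(m):
                m["done"] = True
    return False

place_order(7)
ok = compensate()
print(ok, all(m["done"] for m in outbox))  # → True True
```

The key design choice is that the intent is recorded durably alongside the local transaction, so a crashed delivery can always be retried later; no heavyweight two‑phase commit is needed.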

3. Broadcasting

Message queues enable broadcasting: producers publish once, and any number of consumers can subscribe, reducing the need for multiple point‑to‑point integrations.
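The fan‑out semantics can be sketched in a few lines (this `Topic` class and the subscriber names are illustrative; a real broker does the same per‑subscriber buffering durably and across processes):

```python
import queue

class Topic:
    """Minimal publish/subscribe topic: each subscriber gets its own queue."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self) -> queue.Queue:
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, msg) -> None:
        for q in self.subscribers:  # fan one message out to every subscriber
            q.put(msg)

orders = Topic()
billing = orders.subscribe()
shipping = orders.subscribe()
analytics = orders.subscribe()

orders.publish("order-created:42")  # producer publishes exactly once

a, b, c = billing.get(), shipping.get(), analytics.get()
print(a, b, c)  # → order-created:42 order-created:42 order-created:42
```

Adding a fourth downstream system is just another `subscribe()` call; the producer's code does not change, which is exactly the point‑to‑point integration cost being avoided.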

4. Peak‑shaving and Flow Control

When upstream and downstream processing capacities differ (e.g., front‑end handling millions of requests vs. a database handling only tens of thousands), a message queue acts as a funnel, buffering messages until the downstream can process them, simplifying system design.
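The funnel behavior can be sketched with a bounded standard‑library queue (the sizes are illustrative; whether to drop, block, or spill overflow to disk is a design choice that real systems make explicitly):

```python
import queue

# Bounded queue as the "funnel": upstream bursts are buffered,
# downstream drains at its own pace.
funnel = queue.Queue(maxsize=1000)

# Upstream burst of 2500 requests: accept what fits, shed the rest.
accepted = dropped = 0
for i in range(2500):
    try:
        funnel.put_nowait(i)
        accepted += 1
    except queue.Full:
        dropped += 1  # alternatives: block the producer, or spill to disk

print(accepted, dropped)  # → 1000 1500

# Downstream consumer drains steadily, never seeing the burst.
processed = 0
while not funnel.empty():
    funnel.get()
    processed += 1
print(processed)  # → 1000
```

The database behind the consumer only ever sees traffic at the rate it can handle; the queue absorbs the difference between upstream and downstream capacity.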

03 – Message Queue Usage Summary

1. Message queues are not a panacea; for latency‑sensitive operations requiring strong transactional guarantees, RPC is preferable.

2. For non‑critical or low‑priority tasks, a message queue can offload work.

3. Queues that support eventual consistency can handle distributed‑transaction‑like scenarios more efficiently than heavyweight distributed transactions.

4. When upstream and downstream capacities differ, use a queue as a funnel to smooth traffic.

5. If many downstream systems need to be notified of an event, a message queue is the clear choice.

That concludes this overview of message queues and their typical usage scenarios.

-end-

Tags: distributed systems, message queue, eventual consistency, asynchronous communication, MQ comparison
Written by

Mike Chen's Internet Architecture

Over ten years of BAT architecture experience, shared generously!
