Seven Classic Use Cases of Message Queues
This article walks through seven practical scenarios for message queues such as RocketMQ, Kafka, ActiveMQ, and RabbitMQ: asynchronous processing, traffic smoothing, message bus, delayed tasks, broadcast consumption, distributed transactions, and data hub integration. Each helps solve high-concurrency challenges in modern backend systems.
1 Asynchronous & Decoupling
The author recounts a user‑registration service that previously performed SMS sending synchronously, causing latency and tight coupling. By introducing a message queue, the registration flow returns immediately while a separate consumer handles SMS delivery, achieving asynchronous response and clear separation of concerns.
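The registration flow above can be sketched in-process, with a `BlockingQueue` standing in for the broker (all names here are hypothetical; a real deployment would publish to RocketMQ or Kafka instead of an in-memory queue):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch: registration enqueues an event and returns immediately;
// a separate consumer handles the slow SMS call out of band.
public class AsyncRegistration {
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    static final List<String> sentSms = new ArrayList<>();

    // Fast path: persist the user (omitted) and publish an event; no SMS here.
    static String register(String phone) throws InterruptedException {
        queue.put("REGISTERED:" + phone); // fire-and-forget publish
        return "OK";                      // respond to the caller at once
    }

    // Slow path: the consumer drains events and calls the SMS gateway.
    static void consumeOne() throws InterruptedException {
        String event = queue.take();
        sentSms.add("SMS for " + event);  // stands in for the SMS gateway call
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(register("13800000000")); // returns before any SMS work
        consumeOne();
        System.out.println(sentSms);
    }
}
```

The key property is that `register` never blocks on the SMS provider: the registration service and the notification service are coupled only through the event's topic and payload.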
2 Traffic Smoothing (Peak‑Shaving)
In high‑traffic situations, sudden request spikes can overload databases and CPU/IO. Using a queue, producers publish requests at a controlled rate and consumers process them within a bounded concurrency, preventing database overload and stabilizing the front‑end experience.
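A bounded queue makes the peak-shaving idea concrete (a hedged in-process sketch; in production the broker's backlog plays the role of the buffer, and consumer concurrency is tuned to what the database sustains):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of peak-shaving: a bounded queue absorbs bursts, and offer()
// sheds overflow instead of letting a spike crash the database.
public class PeakShaving {
    // Capacity bounds how much backlog we are willing to buffer.
    static final BlockingQueue<String> buffer = new ArrayBlockingQueue<>(3);

    // Producer side: accept the request if there is room, else shed load.
    static boolean submit(String request) {
        return buffer.offer(request);
    }

    // Consumer side: drain at a rate the database can sustain (null if empty).
    static String processNext() {
        return buffer.poll();
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 5; i++) {
            System.out.println("request-" + i + " accepted: " + submit("request-" + i));
        }
        // Only the first 3 requests fit; the spike beyond capacity was shed,
        // and the consumer works through the buffered 3 at its own pace.
    }
}
```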
3 Message Bus
Similar to a hardware data bus, a message bus enables multiple subsystems to exchange information without direct calls. The author describes a scheduling center that maintains order state and communicates with downstream services (e.g., ticketing, prize calculation) via a queue, reducing inter‑service coupling.
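The bus pattern can be sketched as a topic-to-subscribers map (hypothetical topic names; a real bus would be the broker itself, with each downstream service as an independent consumer group):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Sketch of a message bus: subsystems subscribe to topics and never call
// each other directly; the scheduling center only publishes state changes.
public class MessageBus {
    static final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    static void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    static void publish(String topic, String payload) {
        for (Consumer<String> h : subscribers.getOrDefault(topic, List.of())) {
            h.accept(payload);
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        // Ticketing and prize-calculation subscribe independently of each other.
        subscribe("order.paid", p -> log.add("ticketing saw " + p));
        subscribe("order.paid", p -> log.add("prize-calc saw " + p));
        publish("order.paid", "order-42");
        System.out.println(log);
    }
}
```

Adding a new downstream service means adding a subscriber, with no change to the scheduling center.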
4 Delayed Tasks
For order cancellation after a payment timeout, the service publishes a delayed message. When the delay expires, a consumer checks the order status and cancels unpaid orders. The article includes a RocketMQ 4.x delayed‑message example:
```java
// RocketMQ 4.x delayed message; assumes a started DefaultMQProducer named "producer"
Message msg = new Message();
msg.setTopic("TopicA");
msg.setTags("Tag");
msg.setBody("this is a delay message".getBytes());
// set delay level 5 (= 1 minute; levels run 1s, 5s, 10s, 30s, 1m, 2m, ...)
msg.setDelayTimeLevel(5);
producer.send(msg);
```

RocketMQ 4.x supports 18 predefined delay levels; RocketMQ 5.x allows arbitrary timestamps via three dedicated APIs.
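The broker-side behavior can also be illustrated in-process with `java.util.concurrent.DelayQueue` (a hedged sketch with a hypothetical order model; in production the broker's delay mechanism replaces this entirely):

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// Sketch: an order enters a DelayQueue when created; once the delay expires,
// a consumer checks its status and cancels it if still unpaid.
public class OrderTimeout {
    static class DelayedOrder implements Delayed {
        final String orderId;
        final long deadlineMillis;
        volatile boolean paid = false;

        DelayedOrder(String orderId, long delayMillis) {
            this.orderId = orderId;
            this.deadlineMillis = System.currentTimeMillis() + delayMillis;
        }

        public long getDelay(TimeUnit unit) {
            return unit.convert(deadlineMillis - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }

        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    // Returns the action taken once the order's delay has elapsed.
    static String handleExpired(DelayQueue<DelayedOrder> queue) throws InterruptedException {
        DelayedOrder order = queue.take(); // blocks until the delay expires
        return order.paid ? "keep " + order.orderId : "cancel " + order.orderId;
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedOrder> queue = new DelayQueue<>();
        queue.put(new DelayedOrder("order-1", 50)); // 50 ms stands in for the payment window
        System.out.println(handleExpired(queue));   // order-1 was never paid, so it is cancelled
    }
}
```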
5 Broadcast Consumption
Broadcast consumption delivers each message to every consumer in the consumer group, rather than to just one of them, so every node processes it. Typical scenarios include message push (e.g., driver‑side order dispatch) and cache synchronization across distributed nodes.
5.1 Message Push
A TCP‑based push service acts as both a consumer and a broadcaster, receiving order‑dispatch messages from a producer and pushing them to driver apps via long‑lived connections.
5.2 Cache Synchronization
Applications load dictionary data into local caches (HashMap, Guava, Caffeine). When the dictionary changes, a broadcast message triggers each node to refresh its cache, keeping data consistent across the cluster.
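The cache-refresh pattern can be sketched as follows (a minimal in-process sketch with hypothetical names; in a real cluster each `Node` is a separate process subscribed in broadcast mode):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of broadcast cache refresh: every node holds a local dictionary
// cache; a broadcast "refresh" message makes each node reload from source.
public class CacheSync {
    static final Map<String, String> sourceOfTruth = new HashMap<>();

    static class Node {
        final Map<String, String> localCache = new HashMap<>();

        // Each node reacts to the broadcast by reloading its local cache.
        void onRefreshMessage() {
            localCache.clear();
            localCache.putAll(sourceOfTruth);
        }
    }

    // Broadcast delivery: unlike clustered consumption, EVERY node receives it.
    static void broadcastRefresh(List<Node> nodes) {
        for (Node n : nodes) n.onRefreshMessage();
    }

    public static void main(String[] args) {
        List<Node> cluster = List.of(new Node(), new Node(), new Node());
        sourceOfTruth.put("country:CN", "China"); // the dictionary changes
        broadcastRefresh(cluster);                // all nodes refresh together
        System.out.println(cluster.get(0).localCache.equals(cluster.get(2).localCache));
    }
}
```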
6 Distributed Transactions
Using an e‑commerce order as an example, the author compares three approaches: traditional XA transactions (high overhead), plain message‑driven eventual consistency (risk of inconsistency), and RocketMQ’s transactional messages that provide a two‑phase commit, guaranteeing global consistency even under failures.
The transactional flow includes: producer sends a “half‑message” (acknowledged but not deliverable), executes local transaction, then commits or rolls back the message; the broker delivers or discards the message accordingly, with retry and message‑check mechanisms for network partitions.
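The half-message state machine can be sketched in-process (a hedged model of the broker's bookkeeping, not RocketMQ's actual implementation; the retry and message-check paths are omitted):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the half-message flow: the broker holds a half-message that is
// invisible to consumers until the producer's local transaction resolves it.
public class TxMessageSketch {
    enum State { HALF, COMMITTED, ROLLED_BACK }

    static final Map<String, State> brokerStore = new HashMap<>();
    static final List<String> deliverable = new ArrayList<>();

    // Phase 1: the half-message is acknowledged and stored, but not deliverable.
    static void sendHalf(String msgId) {
        brokerStore.put(msgId, State.HALF);
    }

    // Phase 2: commit makes it deliverable; rollback discards it.
    static void endTransaction(String msgId, boolean localTxSucceeded) {
        if (localTxSucceeded) {
            brokerStore.put(msgId, State.COMMITTED);
            deliverable.add(msgId);
        } else {
            brokerStore.put(msgId, State.ROLLED_BACK);
        }
    }

    public static void main(String[] args) {
        sendHalf("msg-1");
        // Consumers cannot see msg-1 yet; now run the local transaction.
        boolean localTxOk = true;        // e.g. the order row was written
        endTransaction("msg-1", localTxOk);
        System.out.println(deliverable); // msg-1 becomes visible only after commit
    }
}
```

If the producer crashes between the two phases, the real broker periodically checks back with the producer to decide the half-message's fate, which is what closes the consistency gap.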
7 Data Hub (Log/Stream Integration)
Specialized systems (HBase, Elasticsearch, Spark, OpenTSDB) often need the same data. By using Kafka as a central hub, logs are collected by clients, persisted in Kafka, and then consumed by downstream processors (Logstash, Hadoop, etc.), enabling efficient multi‑system ingestion without building separate pipelines.
Wukong Talks Architecture
Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.