Ensuring Reliable Message Delivery and Idempotence in RabbitMQ and Kafka
This article explains common scenarios that cause message loss or non‑idempotent processing in RabbitMQ and Kafka, and presents practical solutions such as persistent delivery, confirm mechanisms, delayed delivery, and unique‑ID plus fingerprint strategies to achieve reliable and idempotent message transmission.
Possible Message Loss Situations
During the Producer's send to the Broker, network issues may cause the message to be lost or the Broker may fail to store it.
The Broker holds the message only in memory; if the Broker crashes before the Consumer processes it, the message is lost.
The Consumer receives the message but encounters an internal error before processing; the Broker assumes the message was handled and proceeds with subsequent messages.
How the Producer Guarantees Reliable Delivery
Ensure the message is successfully sent.
Ensure the MQ node (Broker) successfully receives it.
Receive an acknowledgment from the Broker.
Implement a robust compensation mechanism for retries.
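The ack-then-retry loop above can be sketched with a toy in-memory broker. This is a minimal simulation, not real RabbitMQ/Kafka client code; `FlakyBroker`, `send_with_retry`, and `fail_times` are hypothetical names invented for illustration:

```python
class FlakyBroker:
    """Toy broker whose receive() drops the first few sends (simulated network fault)."""
    def __init__(self, fail_times):
        self.fail_times = fail_times   # number of sends to lose before succeeding
        self.stored = []

    def receive(self, message):
        if self.fail_times > 0:
            self.fail_times -= 1
            return False               # no ack: message lost in transit
        self.stored.append(message)
        return True                    # ack: broker has safely received the message

def send_with_retry(broker, message, max_attempts=5):
    """Resend until the broker acks, up to max_attempts (the compensation mechanism)."""
    for attempt in range(1, max_attempts + 1):
        if broker.receive(message):
            return attempt             # delivered and acknowledged on this attempt
    raise RuntimeError("delivery failed after %d attempts" % max_attempts)

broker = FlakyBroker(fail_times=2)
attempts = send_with_retry(broker, "order-created:42")
print(attempts)       # 3: two simulated losses, then an ack
print(broker.stored)  # ['order-created:42']
```

In a real system the retry should be bounded and backed off, and undeliverable messages routed to a dead-letter store rather than raised as an exception.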
Solution: Message Persistence
Message Persistence to Disk
Configure the queue to be durable (metadata persisted) and set the message's deliveryMode to 2 so the message itself is persisted to disk; the Broker only acknowledges after the message is safely stored.
With these settings, if the Broker crashes, the Producer will not receive an ack and can resend the message.
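The effect of durable storage plus ack-after-store can be illustrated with a toy broker that distinguishes memory from "disk". This is a pure simulation under assumed names (`ToyBroker`, `crash_and_restart`); it only mirrors the semantics of deliveryMode=2, not any real client API:

```python
class ToyBroker:
    """Toy broker: delivery_mode=2 messages go to 'disk', others stay in memory only."""
    def __init__(self):
        self.memory = []
        self.disk = []

    def publish(self, message, delivery_mode=1):
        if delivery_mode == 2:
            self.disk.append(message)   # persisted before the ack is returned
        else:
            self.memory.append(message)
        return True                     # ack is sent only after the message is stored

    def crash_and_restart(self):
        self.memory.clear()             # in-memory messages are gone after a crash
        # disk contents survive the restart

broker = ToyBroker()
broker.publish("transient")                 # default delivery_mode=1: memory only
broker.publish("durable", delivery_mode=2)  # persisted to disk
broker.crash_and_restart()
print(broker.memory)  # []
print(broker.disk)    # ['durable']
```

Because the ack is returned only after storage, a Producer that never sees the ack knows it must resend, which is exactly the guarantee described above.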
Delayed Message Delivery
Use a second delayed‑confirmation message and a callback service to verify the original message before committing, reducing the number of database writes in high‑concurrency scenarios.
The upstream service stores its business data and sends a message to the Broker.
A delayed confirmation message is sent.
The downstream service consumes the original message.
A new confirmation message (not a Broker confirm) is sent.
The callback service listens for this confirmation and records the message in the database.
If the callback finds no corresponding record, it triggers a retry via RPC to the upstream system.
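The callback service's reconciliation step can be sketched as a simple set difference: any message the upstream claims to have sent that has no confirmation record triggers a retry RPC. `reconcile` and `retry_rpc` are hypothetical names for this sketch; a real implementation would query the confirmation table and call the upstream system's resend endpoint:

```python
def reconcile(confirmed_ids, expected_ids, retry_rpc):
    """Callback check: any expected message with no DB record triggers an upstream retry."""
    missing = [mid for mid in expected_ids if mid not in confirmed_ids]
    for mid in missing:
        retry_rpc(mid)     # ask the upstream system to resend this message
    return missing

resent = []
missing = reconcile(
    confirmed_ids={"m1", "m3"},          # records the callback service has written
    expected_ids=["m1", "m2", "m3"],     # messages the upstream claims it sent
    retry_rpc=resent.append,             # stand-in for the real RPC call
)
print(missing)  # ['m2']
print(resent)   # ['m2']
```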
Non‑Idempotent Scenarios in RabbitMQ
Network glitches cause the Consumer's ack to be lost; the Broker, never having seen the ack, redelivers the message, leading to duplicate processing.
Network jitter during message transmission between Broker and Consumer.
Consumer failures or exceptions.
Non‑Idempotent Scenarios in Kafka
If a Consumer restarts before committing its offset, the same messages may be consumed again, causing duplicates.
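This restart-before-commit behavior can be simulated with a toy consumer whose committed offset only advances after processing completes. `ToyConsumer` and `crash_before_commit` are invented names for the sketch; real Kafka consumers commit offsets via the client API, but the at-least-once consequence is the same:

```python
class ToyConsumer:
    """Toy Kafka-style consumer: the offset is committed only after processing."""
    def __init__(self, log):
        self.log = log          # the partition's message log
        self.committed = 0      # next offset to read on (re)start
        self.seen = []          # side effects of processing

    def poll_and_process(self, crash_before_commit=False):
        for offset in range(self.committed, len(self.log)):
            self.seen.append(self.log[offset])   # side effect happens first...
            if crash_before_commit:
                return                           # ...but the offset was never committed
            self.committed = offset + 1          # commit advances the read position

consumer = ToyConsumer(log=["a", "b", "c"])
consumer.poll_and_process(crash_before_commit=True)  # processes "a", then "crashes"
consumer.poll_and_process()                          # restart: "a" is delivered again
print(consumer.seen)  # ['a', 'a', 'b', 'c']
```

The duplicate `'a'` is why consumer-side idempotence is needed even when nothing is lost.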
Solution: Unique ID + Fingerprint
Generate a globally unique identifier (e.g., primary key of the business table) and a fingerprint (e.g., timestamp + business code) for each operation. Use the unique ID for deduplication in the database and route messages accordingly, achieving idempotence across multiple databases.
A unified ID generation service provides IDs to upstream services, which then send messages to the Broker.
An ID‑routing component listens to messages, attempts to insert them; if insertion succeeds (no duplicate), the message proceeds downstream; otherwise it is dropped.
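The insert-or-drop routing step can be sketched against a table with a unique-key constraint on the message ID. The fingerprint scheme (hash of timestamp plus business code) follows the description above, but `DedupTable`, `try_insert`, and the exact hashing are assumptions of this sketch, not a prescribed implementation:

```python
import hashlib

def fingerprint(business_code, timestamp):
    """Fingerprint built from timestamp + business code, per the scheme above."""
    return hashlib.sha256(f"{timestamp}:{business_code}".encode()).hexdigest()[:16]

class DedupTable:
    """Simulates a DB table with a unique-key constraint on the message ID."""
    def __init__(self):
        self.rows = {}

    def try_insert(self, msg_id, fp):
        if msg_id in self.rows:
            return False            # duplicate: unique constraint fires, drop message
        self.rows[msg_id] = fp
        return True                 # first delivery: forward downstream

table = DedupTable()
fp = fingerprint("ORDER_PAID", 1700000000)
print(table.try_insert("msg-1001", fp))  # True  -> process downstream
print(table.try_insert("msg-1001", fp))  # False -> duplicate, dropped
```

In a real database the same effect comes from `INSERT` into a table whose primary key is the unique message ID: the second insert violates the constraint, and the routing component drops the redelivered message.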
Code Ape Tech Column
Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn