Master Kafka’s Delivery Guarantees: At‑Most, At‑Least, and Exactly‑Once Explained
This article explains Kafka’s three delivery semantics—At most once, At least once, and Exactly once—from both producer and consumer perspectives, details the required configuration settings, and discusses how Kafka ensures idempotence, transaction support, and prevents data loss, duplication, and ordering issues.
Kafka Delivery Semantics
Kafka provides three possible message delivery guarantees: At most once (messages may be lost but are never redelivered), At least once (messages are never lost but may be redelivered), and Exactly once (each message is delivered only once).
Producer Perspective
At most once: the producer sends a message and does not wait for any acknowledgment, so loss is possible but duplication cannot occur.
At least once: the producer waits for an acknowledgment; if the ack is not received it retries, which can cause duplicate messages.
Exactly once: the producer operates idempotently, ensuring that repeated sends result in a single stored record.
Producer Configuration
At least once (default): no special configuration is required. Kafka's default at-least-once behavior comes from acknowledgments plus a very high retry count (retries=2147483647); before Kafka 3.0 the producer defaulted to acks=1, and since 3.0 it defaults to acks=all with enable.idempotence=true.
At most once: set acks=0 (no broker acknowledgment) and optionally retries=0.
Exactly once: enable idempotence with enable.idempotence=true and set acks=all; max.in.flight.requests.per.connection may stay at its default of 5, since the idempotent producer preserves ordering for up to five in-flight requests.
Note that without idempotence, retries combined with max.in.flight.requests.per.connection > 1 can deliver messages out of order; set it to 1 to avoid this.
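These three producer profiles can be sketched side by side as plain property sets (a minimal illustration using the standard producer property names; the values follow the settings described above):

```java
import java.util.Properties;

// At-most-once: fire-and-forget, no acknowledgment, no retries
Properties atMostOnce = new Properties();
atMostOnce.put("acks", "0");
atMostOnce.put("retries", "0");

// At-least-once: wait for an acknowledgment and retry on failure
Properties atLeastOnce = new Properties();
atLeastOnce.put("acks", "1");
atLeastOnce.put("retries", String.valueOf(Integer.MAX_VALUE));

// Exactly-once (per partition): idempotent producer
Properties exactlyOnce = new Properties();
exactlyOnce.put("enable.idempotence", "true");
exactlyOnce.put("acks", "all");
exactlyOnce.put("max.in.flight.requests.per.connection", "5");
```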
How Kafka Achieves Idempotence
Each producer instance is assigned a unique producer ID (PID) and attaches a monotonically increasing sequence number to every batch it sends to a given <topic, partition>. The broker stores the highest sequence number seen per <PID, partition>; batches with a lower or equal sequence number are discarded as duplicates, guaranteeing exactly-once delivery per partition within a single producer session.
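This bookkeeping can be illustrated with a toy model (not Kafka's actual implementation; real brokers track sequence numbers per <PID, partition> and validate gaps as well):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of broker-side dedup: highest sequence number seen per producer ID.
Map<Long, Integer> lastSeq = new HashMap<>();

// Accept a batch only if its sequence number is higher than the stored high-water mark.
boolean accept(long pid, int seq) {
    Integer last = lastSeq.get(pid);
    if (last != null && seq <= last) {
        return false;          // duplicate or stale batch: discard
    }
    lastSeq.put(pid, seq);     // record the new high-water mark
    return true;
}
```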
<code>Properties props = new Properties();
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
props.put("acks", "all"); // default (and required) when idempotence is enabled
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
String topic = "my-topic"; // example topic name
producer.send(new ProducerRecord<>(topic, "test"));
</code>
Consumer Perspective
At most once: the consumer may commit the offset before processing; if processing then fails, the message is lost.
At least once: the consumer processes the message first and commits the offset afterwards; a failure between the two causes the same message to be re-processed.
Exactly once: processing and offset commit are made atomic, typically with Kafka transactions and isolation.level=read_committed.
Consumer Configuration
At least once: set enable.auto.commit=false and manually call consumer.commitSync() after successful processing.
At most once: keep enable.auto.commit=true and set a very small auto.commit.interval.ms so offsets are committed before (or regardless of) processing.
Exactly once: set isolation.level=read_committed and use transactional producers.
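An at-least-once consumer loop can be sketched as follows (a minimal configuration sketch assuming the standard Java client, a local broker, and hypothetical group and topic names; process() stands in for your own logic):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "demo-group");       // hypothetical consumer group
props.put("enable.auto.commit", "false");  // take manual control of offsets
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(List.of("my-topic"));   // hypothetical topic
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, String> record : records) {
        process(record);                   // placeholder for your processing logic
    }
    consumer.commitSync();                 // commit only after successful processing
}
```

If the process crashes before commitSync(), the uncommitted records are redelivered on restart, which is exactly the at-least-once behavior described above.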
Preventing Data Loss
Loss can occur at the producer, broker, or consumer level. Mitigations include:
Producer: use retries and send callbacks to detect failures, and set acks=all so that every in-sync replica must acknowledge the write.
Broker: use a replication factor ≥ 3 and min.insync.replicas > 1 so writes are persisted to multiple brokers before being acknowledged.
Disable unclean leader election (unclean.leader.election.enable=false) to avoid electing out-of-sync replicas as leader.
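For reference, the broker-side durability settings above can be collected into a single properties sketch (the keys are the standard broker configuration names; the values are illustrative):

```java
import java.util.Properties;

// Durability-oriented broker settings (illustrative values)
Properties brokerConfig = new Properties();
brokerConfig.put("default.replication.factor", "3");         // replicate each partition to 3 brokers
brokerConfig.put("min.insync.replicas", "2");                // with acks=all, require 2 in-sync acks
brokerConfig.put("unclean.leader.election.enable", "false"); // never elect an out-of-sync replica
```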
Message Deduplication and Transactions
Kafka’s idempotent producer removes duplicate records on the broker side. For cross-partition or cross-session guarantees, use transactional producers: enable idempotence, set a transactional.id, and wrap sends between beginTransaction() and commitTransaction(). Consumers must read with isolation.level=read_committed to see only committed data.
<code>// requires enable.idempotence=true and a transactional.id in the producer config
producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(record1);
    producer.send(record2);
    producer.commitTransaction();
} catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
    producer.close(); // fatal errors: the producer must be closed
} catch (KafkaException e) {
    producer.abortTransaction(); // abortable error: retry in a new transaction
}
</code>
Message Ordering
Kafka guarantees order only within a partition. Out-of-order delivery can happen when a failed batch is retried while later batches are in flight; setting max.in.flight.requests.per.connection=1 prevents this at the cost of throughput, and enabling idempotence preserves ordering with up to 5 in-flight requests.