
Master Kafka’s Delivery Guarantees: At‑Most, At‑Least, and Exactly‑Once Explained

This article explains Kafka’s three delivery semantics—At most once, At least once, and Exactly once—from both producer and consumer perspectives, details the required configuration settings, and discusses how Kafka ensures idempotence, transaction support, and prevents data loss, duplication, and ordering issues.

Spring Full-Stack Practical Cases

Kafka Delivery Semantics

Kafka provides three possible message delivery guarantees: At most once (messages may be lost but are never redelivered), At least once (messages are never lost but may be redelivered), and Exactly once (each message is delivered only once).

Producer Perspective

At most once: the producer sends a message and does not wait for any acknowledgment, so loss is possible but duplication cannot occur.

At least once: the producer waits for an acknowledgment; if the ack is not received it retries, which can cause duplicate messages.

Exactly once: the producer operates idempotently, ensuring that repeated sends result in a single stored record.

Producer Configuration

At least once (default): no special configuration is needed; Kafka defaults to acks=1 and retries=2147483647. (Note: since Kafka 3.0 the client defaults changed to acks=all with enable.idempotence=true.)

At most once: set acks=0 (the producer waits for no broker acknowledgment) and optionally retries=0. Be aware that if retries are left enabled without idempotence, max.in.flight.requests.per.connection must be 1 to avoid out-of-order delivery.
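An at-most-once producer configuration can be sketched as follows (plain java.util.Properties; the broker address is a placeholder):

```java
import java.util.Properties;

public class AtMostOnceProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("acks", "0");    // fire-and-forget: no broker acknowledgment
        props.put("retries", "0"); // never retry, so a failed send is simply lost
        return props;
    }

    public static void main(String[] args) {
        Properties props = AtMostOnceProducerConfig.build();
        System.out.println(props.getProperty("acks") + " " + props.getProperty("retries"));
    }
}
```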

Exactly once: enable idempotence with enable.idempotence=true and set acks=all; max.in.flight.requests.per.connection must be at most 5, since the broker tracks only the five most recent batch sequence numbers per producer.

Kafka’s default at‑least‑once semantics stem from acks=1 and a very high retries value.

How Kafka Achieves Idempotence

Each producer is assigned a unique PID, and every message batch carries a monotonically increasing sequence number per <topic, partition>. The broker stores the highest sequence number seen for each <PID, partition>; messages with a lower or equal sequence number are discarded, guaranteeing exactly-once delivery per partition within a single producer session.

<code>import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
props.put(ProducerConfig.ACKS_CONFIG, "all"); // required (and defaulted) when idempotence is enabled
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
        "org.apache.kafka.common.serialization.StringSerializer");

String topic = "my-topic"; // example topic name
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.send(new ProducerRecord<>(topic, "test"));
producer.close();
</code>
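The broker-side sequence check can be illustrated with a small stand-alone simulation (no Kafka dependency; the class and method names are invented for illustration, and the real broker additionally rejects gaps in the sequence):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the broker's per-PID sequence check (illustrative only).
public class SequenceDedup {
    private final Map<Long, Long> highestSeqByPid = new HashMap<>();

    // Returns true if the record is accepted, false if it is a duplicate.
    public boolean tryAppend(long pid, long seq) {
        Long highest = highestSeqByPid.get(pid);
        if (highest != null && seq <= highest) {
            return false; // lower or equal sequence: discard as a duplicate
        }
        highestSeqByPid.put(pid, seq);
        return true;
    }

    public static void main(String[] args) {
        SequenceDedup broker = new SequenceDedup();
        System.out.println(broker.tryAppend(42L, 0)); // first write: accepted
        System.out.println(broker.tryAppend(42L, 1)); // next in sequence: accepted
        System.out.println(broker.tryAppend(42L, 1)); // retried duplicate: rejected
    }
}
```

A producer retry resends the same batch with the same sequence number, so the second `tryAppend(42L, 1)` is rejected and the partition stores the record exactly once.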

Consumer Perspective

At most once: the consumer may commit the offset before processing; if processing fails, the message is lost.

At least once: the consumer processes the message first and commits the offset afterwards; failures cause the same message to be re-processed.

Exactly once: processing and offset commit are made atomic, typically with Kafka transactions and isolation.level=read_committed.

Consumer Configuration

At least once: set enable.auto.commit=false and manually call consumer.commitSync() after successful processing.

At most once: keep enable.auto.commit=true and set a very small auto.commit.interval.ms so offsets are committed before processing completes.

Exactly once: set isolation.level=read_committed and consume data written by transactional producers.
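The three consumer modes differ only in a few settings; a side-by-side sketch (plain java.util.Properties; broker address and group id are placeholders):

```java
import java.util.Properties;

public class ConsumerModes {
    // At-least-once: disable auto-commit; the application commits after processing.
    public static Properties atLeastOnce() {
        Properties props = base();
        props.put("enable.auto.commit", "false");
        return props;
    }

    // At-most-once: auto-commit frequently, so offsets may be committed before processing.
    public static Properties atMostOnce() {
        Properties props = base();
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "100");
        return props;
    }

    // Exactly-once reads of transactional data: only committed records are visible.
    public static Properties exactlyOnce() {
        Properties props = base();
        props.put("enable.auto.commit", "false");
        props.put("isolation.level", "read_committed");
        return props;
    }

    private static Properties base() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        return props;
    }
}
```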

Preventing Data Loss

Loss can occur at the producer, broker, or consumer level. Mitigations include:

Producer: use retries and send callbacks to detect failures, and set acks=all so that all in-sync replicas must acknowledge a write.

Broker: use a replication factor of at least 3 and min.insync.replicas > 1 to ensure writes are persisted to multiple brokers.

Disable unclean leader election (unclean.leader.election.enable=false) to avoid electing out-of-sync replicas as leader.
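Taken together, a loss-resistant setup might look like this (illustrative values; min.insync.replicas must be less than the replication factor so that one broker can fail without blocking writes):

```
# Topic/broker settings (topic created with --replication-factor 3)
min.insync.replicas=2
unclean.leader.election.enable=false

# Producer settings
acks=all
retries=2147483647
```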

Message Deduplication and Transactions

Kafka’s idempotent producer removes duplicate records on the broker side. For cross-partition or cross-session guarantees, use transactional producers: enable idempotence, set a transactional.id, and wrap sends between beginTransaction() and commitTransaction(). Consumers must read with isolation.level=read_committed to see only committed data.

<code>// Requires a producer configured with transactional.id (which implies idempotence).
producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(record1);
    producer.send(record2);
    producer.commitTransaction();
} catch (ProducerFencedException | OutOfOrderSequenceException | AuthorizationException e) {
    // Fatal errors: this producer instance cannot continue; close it instead of aborting.
    producer.close();
} catch (KafkaException e) {
    // Transient error: abort so read_committed consumers never see these records.
    producer.abortTransaction();
}
</code>

Message Ordering

Kafka guarantees order only within a partition. Out-of-order delivery can happen when a failed batch is retried while later batches are already in flight; setting max.in.flight.requests.per.connection=1 prevents this at the cost of throughput, and enabling idempotence preserves ordering with up to 5 in-flight requests.
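Both options can be expressed as producer settings (a sketch using plain java.util.Properties; pick one approach):

```java
import java.util.Properties;

public class OrderingConfig {
    // Option 1: serialize requests; strict ordering, but lower throughput.
    public static Properties singleInFlight() {
        Properties props = new Properties();
        props.put("max.in.flight.requests.per.connection", "1");
        props.put("retries", "2147483647");
        return props;
    }

    // Option 2: idempotent producer keeps ordering with up to 5 in-flight requests.
    public static Properties idempotentPipelined() {
        Properties props = new Properties();
        props.put("enable.idempotence", "true");
        props.put("acks", "all");
        props.put("max.in.flight.requests.per.connection", "5");
        return props;
    }
}
```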

Written by Spring Full-Stack Practical Cases

Full-stack Java development with Vue 2/3 front-end suite; hands-on examples and source code analysis for Spring, Spring Boot 2/3, and Spring Cloud.