
Master Spring Cloud Stream with RabbitMQ: From Basics to Advanced Partitioning

This guide walks through setting up Spring Cloud Stream with RabbitMQ, covering supported binders, core components, configuration, code examples for producers and consumers, consumer groups, and message partitioning to build scalable, event‑driven microservices.


Introduction

Spring Cloud Stream is a framework for building highly scalable, event‑driven microservices connected to shared messaging systems. It abstracts the differences between brokers behind a binder abstraction, so application code does not need to change when the underlying middleware is swapped.

Supported binders include:

RabbitMQ

Apache Kafka

Kafka Streams

Amazon Kinesis

Google PubSub (partner maintained)

Solace PubSub+ (partner maintained)

Azure Event Hubs (partner maintained)

AWS SQS (partner maintained)

AWS SNS (partner maintained)

Apache RocketMQ (partner maintained)

Each binder has its own GitHub repository documented in the official reference.

The core building blocks of Spring Cloud Stream are:

Destination Binders – components that integrate with the MQ.

Destination Bindings – bridges between the middleware and the application code (producer/consumer).

Message – the data structure used by producers and consumers to communicate via the binder.

Quick Start

Dependencies (pom.xml):

<code>&lt;properties&gt;
  &lt;spring-cloud.version&gt;Hoxton.SR12&lt;/spring-cloud.version&gt;
&lt;/properties&gt;
&lt;dependencies&gt;
  &lt;dependency&gt;
    &lt;groupId&gt;org.springframework.boot&lt;/groupId&gt;
    &lt;artifactId&gt;spring-boot-starter-amqp&lt;/artifactId&gt;
  &lt;/dependency&gt;
  &lt;dependency&gt;
    &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt;
    &lt;artifactId&gt;spring-cloud-starter-stream-rabbit&lt;/artifactId&gt;
  &lt;/dependency&gt;
&lt;/dependencies&gt;
&lt;dependencyManagement&gt;
  &lt;dependencies&gt;
    &lt;dependency&gt;
      &lt;groupId&gt;org.springframework.cloud&lt;/groupId&gt;
      &lt;artifactId&gt;spring-cloud-dependencies&lt;/artifactId&gt;
      &lt;version&gt;${spring-cloud.version}&lt;/version&gt;
      &lt;type&gt;pom&lt;/type&gt;
      &lt;scope&gt;import&lt;/scope&gt;
    &lt;/dependency&gt;
  &lt;/dependencies&gt;
&lt;/dependencyManagement&gt;</code>

Application configuration (application.yml):

<code>spring:
  rabbitmq:
    host: localhost
    virtual-host: bus
    port: 5672
    username: xxx
    password: xxx
---
spring:
  cloud:
    stream:
      bindings:
        myInput:
          destination: demo
        myOutput:
          destination: demo</code>

Define the binding interface:

<code>import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;

public interface StreamBinding {
  String INPUT = "myInput";
  String OUTPUT = "myOutput";

  @Input(StreamBinding.INPUT)
  SubscribableChannel input();

  @Output(StreamBinding.OUTPUT)
  MessageChannel output();
}</code>

Consumer implementation:

<code>import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.stereotype.Component;

@Component
@EnableBinding(StreamBinding.class)
public class StreamReceiver {
  private final Logger logger = LoggerFactory.getLogger(StreamReceiver.class);

  @StreamListener(StreamBinding.INPUT)
  public void receive(String message) {
    logger.info("Received message: {}", message);
  }
}</code>

Producer endpoint:

<code>import javax.annotation.Resource;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Controller class name is illustrative.
@RestController
public class SendController {
  @Resource
  private StreamBinding streamBinding;

  @GetMapping("/send")
  public void send() {
    streamBinding.output().send(
      MessageBuilder.withPayload("First Message...").build()
    );
  }
}</code>

After starting the service, the RabbitMQ binder automatically creates the demo exchange and, since no consumer group is configured, an anonymous auto‑delete queue for each instance; these queues are removed as soon as the owning instance stops.

When multiple instances run, each receives the same message, which can cause duplicate processing. To avoid this, configure a consumer group:

<code>spring:
  cloud:
    stream:
      bindings:
        myInput:
          destination: demo
          group: g_test
        myOutput:
          destination: demo</code>

With a group, all instances share a single durable queue named destination.group (here demo.g_test), and the broker load‑balances messages across them round‑robin, so each message is processed by exactly one instance per group.
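The competing‑consumer behavior can be sketched in plain Java (a toy model, not binder code): with one shared queue, the i‑th delivery simply goes to instance i mod N.

```java
public class RoundRobinDemo {
  // Toy model of competing consumers: delivery i from the shared
  // group queue goes to instance (i mod instances).
  static int target(int deliveryIndex, int instances) {
    return deliveryIndex % instances;
  }

  public static void main(String[] args) {
    String[] queue = {"m1", "m2", "m3", "m4"};
    int instances = 2; // two instances in group g_test
    for (int i = 0; i < queue.length; i++) {
      System.out.println(queue[i] + " -> instance " + target(i, instances));
    }
    // m1 and m3 go to instance 0; m2 and m4 go to instance 1
  }
}
```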

For scenarios requiring that messages with the same characteristic be processed by the same instance, enable partitioning:

<code>spring:
  cloud:
    stream:
      bindings:
        myInput:
          destination: demo
          group: g_test
          consumer:
            partitioned: true
        myOutput:
          destination: demo
          producer:
            partitionKeyExpression: '1'
            partitionCount: 2
      instanceCount: 2
      instanceIndex: 0</code>
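The configuration above is for instance 0; every instance must declare the same instanceCount but its own, unique instanceIndex. A second instance would differ only in the index (a sketch; the bindings section stays the same):

```yml
spring:
  cloud:
    stream:
      instanceCount: 2
      instanceIndex: 1  # unique per instance: 0, 1, ...
```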

Spring Cloud Stream evaluates partitionKeyExpression against each outgoing message, passes the result through the partition selector (by default, the key's hash modulo partitionCount), and stores the resulting partition index in the scst_partition header. The exchange routing key is then derived from this header, ensuring that all messages belonging to the same partition are delivered to the same instance. Note that a constant expression such as '1' maps every message to a single partition; in practice you would use a SpEL expression over the message, such as payload.id or headers['partitionKey'].
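That calculation can be sketched in plain Java (a simplification; partitionFor is a hypothetical stand‑in for the framework's default selector, not the actual API):

```java
public class PartitionDemo {
  // Simplified stand-in for the default partition selection:
  // hash the partition key, then take it modulo partitionCount.
  static int partitionFor(Object key, int partitionCount) {
    return Math.abs(key.hashCode()) % partitionCount;
  }

  public static void main(String[] args) {
    int partitionCount = 2;
    // Messages sharing a key always map to the same partition,
    // and therefore to the same consumer instance.
    System.out.println(partitionFor("order-42", partitionCount));
    System.out.println(partitionFor("order-42", partitionCount)); // identical
  }
}
```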

After configuring partitioning and running multiple instances, each partitioned message is received by exactly one instance: the one whose instanceIndex matches the computed partition.

Finally, the exchange is bound to a separate queue per partition, each with its own routing key (in the RabbitMQ binder these typically follow the pattern destination-index, e.g. demo-0 and demo-1), allowing targeted delivery based on the computed partition.

All steps above complete a functional Spring Cloud Stream setup with RabbitMQ, consumer groups, and message partitioning.

Tags: Java, microservices, Message Queues, RabbitMQ, Spring Cloud Stream
Written by Spring Full-Stack Practical Cases: full-stack Java development with a Vue 2/3 front-end suite; hands-on examples and source code analysis for Spring, Spring Boot 2/3, and Spring Cloud.
