
Implementing Distributed Tracing with Spring Cloud Sleuth, Zipkin and the ELK Stack

This guide explains how to set up distributed tracing for Spring Cloud microservices using Sleuth and Zipkin, integrate logs with the ELK stack, configure Logback for Logstash output, and query traces in Kibana, providing step‑by‑step code and configuration examples.

Top Architect

In microservice architectures, services are often spread across multiple servers, requiring a distributed logging solution. Spring Cloud provides the Sleuth component to trace service calls via logs, and combined with Zipkin and the ELK stack (Elasticsearch, Logstash, Kibana) you can collect, store, and visualize these logs.

1. Sleuth Management Service

First, create a dedicated Spring Boot project for the Sleuth management server. Add the following Maven dependencies:

<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
    <scope>runtime</scope>
</dependency>

<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>

Configure the service registry address (e.g., Eureka):

eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:1111/eureka/
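
The server's own application.yml can also declare a service name and port. The values below are illustrative; the port matches Zipkin's conventional 9411, which the microservices point at later via zipkin.base-url:

```yaml
spring:
  application:
    name: zipkin-server   # illustrative name for the registry
server:
  port: 9411              # Zipkin's conventional port
```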

Add the necessary annotations to the Spring Boot main class:

package com.wlf.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import zipkin.server.EnableZipkinServer;

@EnableDiscoveryClient
@EnableZipkinServer
@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Running this application and opening its URL in a browser (Zipkin's conventional port is 9411) will display the Zipkin UI.

2. Instrumenting the Microservices

For each microservice that needs to be traced, add the Sleuth and Zipkin starter dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

Configure Sleuth and Zipkin in application.yml:

spring:
  sleuth:
    sampler:
      percentage: 1.0
  zipkin:
    base-url: http://localhost:9411

The spring.sleuth.sampler.percentage setting controls the proportion of requests that are traced (1.0 means 100%). Lower this value in production to balance tracing overhead against observability.
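
For example, a production service might sample roughly one request in ten (the value is illustrative; tune it to your traffic):

```yaml
spring:
  sleuth:
    sampler:
      percentage: 0.1   # trace roughly 10% of requests
```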

Start the service registry, gateway, the instrumented microservices, and the Sleuth management service. Invoking any microservice through the gateway will generate trace data visible in the Zipkin UI.
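
Sleuth stamps each request with 64-bit trace and span identifiers, rendered in logs as 16-character lowercase hex (the X-B3-TraceId and X-B3-SpanId MDC fields). The sketch below only illustrates that id format and the `[service,traceId,spanId,exportable]` shape Sleuth prepends to log lines; it is not Sleuth's actual implementation:

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative only: generates ids in the B3 format Sleuth uses.
public class TraceIdDemo {
    static String newId() {
        long id = ThreadLocalRandom.current().nextLong();
        // %016x pads to 16 lowercase hex characters, matching B3 ids
        return String.format("%016x", id);
    }

    public static void main(String[] args) {
        String traceId = newId();
        String spanId = newId();
        // Shape of the prefix Sleuth adds to each log line
        System.out.println("[my-service," + traceId + "," + spanId + ",true]");
    }
}
```

Every service a request passes through logs the same trace id with a different span id, which is what lets Zipkin stitch the call chain back together.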

3. Setting Up the ELK Stack

Install Elasticsearch, Logstash and Kibana (ELK). Elasticsearch and Kibana are straightforward; Logstash requires a configuration file, for example:

input {
  tcp {
    port => 4560
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["192.168.160.66:9200","192.168.160.88:9200","192.168.160.166:9200"]
    index => "applog"
  }
}

Start Elasticsearch, Kibana and Logstash (using the -f option to point to the config file). Create the applog index pattern in Kibana to begin visualizing logs.
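
Assuming a standard archive install, the startup commands look along these lines (paths are illustrative; adjust them to your installation directories):

```
$ES_HOME/bin/elasticsearch
$KIBANA_HOME/bin/kibana
$LOGSTASH_HOME/bin/logstash -f logstash.conf   # -f points to the config above
```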

4. Logback Configuration for Logstash

Spring Boot uses Logback by default, and the logstash-logback-encoder library lets Logback ship log events to Logstash as JSON over TCP. Provide a logback-spring.xml that defines a console appender, a daily rolling file appender, and a Logstash TCP socket appender. Below is a complete example:
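
The LogstashTcpSocketAppender used below comes from logstash-logback-encoder, so that dependency must be on the classpath (the version shown is illustrative):

```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
```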

<?xml version="1.0" encoding="UTF-8"?>
<configuration scan="true" scanPeriod="10 seconds">
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <property name="CONSOLE_LOG_PATTERN" value="%date [%thread] %-5level %logger{36} - %msg%n"/>

    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <withJansi>true</withJansi>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.160.66:4560</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    {"severity":"%level","service":"${springAppName:-}","trace":"%X{X-B3-TraceId:-}","span":"%X{X-B3-SpanId:-}","exportable":"%X{X-Span-Export:-}","pid":"${PID:-}","thread":"%thread","class":"%logger{40}","rest":"%message"}
                </pattern>
            </providers>
        </encoder>
    </appender>

    <appender name="dailyRollingFileAppender" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <File>main.log</File>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <FileNamePattern>main.%d{yyyy-MM-dd}.log</FileNamePattern>
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
            <Pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{35} - %msg %n</Pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>DEBUG</level>
        </filter>
    </appender>

    <springProfile name="!production">
        <logger name="com.myfee" level="DEBUG"/>
        <logger name="org.springframework.web" level="INFO"/>
        <root level="info">
            <appender-ref ref="stdout"/>
            <appender-ref ref="dailyRollingFileAppender"/>
            <appender-ref ref="logstash"/>
        </root>
    </springProfile>

    <springProfile name="production">
        <logger name="com.myfee" level="DEBUG"/>
        <logger name="org.springframework.web" level="INFO"/>
        <root level="info">
            <appender-ref ref="stdout"/>
            <appender-ref ref="dailyRollingFileAppender"/>
            <appender-ref ref="logstash"/>
        </root>
    </springProfile>
</configuration>

Note that spring.application.name must be defined in bootstrap.yml, because logback-spring.xml is loaded before application.yml.
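
A minimal bootstrap.yml is enough (the service name is illustrative):

```yaml
# bootstrap.yml -- loaded before application.yml, so the name is
# available when logback-spring.xml resolves ${springAppName}
spring:
  application:
    name: my-service
```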

5. Querying Logs

After starting all components (Eureka, gateway, microservices, Sleuth, Elasticsearch, Logstash, Kibana), invoke a microservice endpoint. The logs will appear in the console, be shipped to Logstash, stored in Elasticsearch, and can be searched in Kibana’s Discover view. The rest field in the JSON payload contains the original log message, while the trace and span fields allow you to follow the full request path across services.
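
To make the stored events concrete, here is a minimal plain-Java sketch that assembles a JSON line shaped like the ones the Logback pattern above emits. The field names come from the logback-spring.xml pattern; the values are invented for illustration:

```java
// Illustrative only: builds a JSON log line in the shape produced by the
// LoggingEventCompositeJsonEncoder pattern in logback-spring.xml.
public class LogLineDemo {
    static String logLine(String level, String service, String traceId,
                          String spanId, String message) {
        return String.format(
            "{\"severity\":\"%s\",\"service\":\"%s\",\"trace\":\"%s\"," +
            "\"span\":\"%s\",\"rest\":\"%s\"}",
            level, service, traceId, spanId, message);
    }

    public static void main(String[] args) {
        System.out.println(logLine("INFO", "order-service",
            "5f2c9a3b1d4e8f7a", "9b1c2d3e4f5a6b7c", "order created"));
        // prints: {"severity":"INFO","service":"order-service","trace":"5f2c9a3b1d4e8f7a","span":"9b1c2d3e4f5a6b7c","rest":"order created"}
    }
}
```

In Kibana's Discover view, filtering on the trace field (for example trace:"5f2c9a3b1d4e8f7a") pulls up every log line from every service that participated in that request.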

With this setup, you have a complete distributed logging and tracing pipeline for Spring Cloud microservices, enabling real‑time monitoring, debugging, and performance analysis.
