Implementing Distributed Logging with Spring Cloud Sleuth and the ELK Stack
This guide explains how to set up distributed logging for microservices using Spring Cloud Sleuth, Zipkin, and the ELK stack, covering dependency configuration, service registration, Logstash and Kibana setup, logback-spring.xml customization, and log querying to trace inter‑service calls.
In microservice architectures, services are often spread across multiple servers, requiring a distributed logging solution. Spring Cloud Sleuth provides tracing capabilities that, together with Zipkin, allow you to capture service dependencies via logs.
The article uses the ELK stack (Elasticsearch, Logstash, Kibana) as the log collection and visualization platform.
1. Sleuth Setup
Step 1: Sleuth Management Service
Add the following Maven dependencies to a dedicated project:
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-autoconfigure-ui</artifactId>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>io.zipkin.java</groupId>
    <artifactId>zipkin-server</artifactId>
</dependency>

Configure the Eureka client address in application.yml:
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:1111/eureka/

Add the Zipkin and service-discovery annotations to the Spring Boot main class:
package com.wlf.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import zipkin.server.EnableZipkinServer;

@EnableDiscoveryClient
@EnableZipkinServer
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Running this service and opening its URL in a browser displays the Zipkin UI.
Step 2: Instrumented Microservice
In each microservice that should be traced, add the following dependencies:
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-sleuth-zipkin</artifactId>
</dependency>

Configure Sleuth and Zipkin in application.yml:
spring:
  sleuth:
    sampler:
      percentage: 1.0  # sample 100% of requests (the default is 0.1)
  zipkin:
    base-url: http://localhost:9411

Start the service registry, the gateway, the required microservices, and the Sleuth management service, then invoke any microservice. The Zipkin UI will show the trace logs and the service dependency graph.
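With the sampler percentage set to 1.0, every request is traced. Sleuth assigns each sampled request a 64-bit trace ID and span ID, rendered on the wire as 16-character lowercase hex strings (the X-B3-TraceId and X-B3-SpanId headers that appear later in the Logback pattern). A minimal sketch of how such IDs look, using plain Java rather than Sleuth's actual generator:

```java
import java.util.concurrent.ThreadLocalRandom;

public class B3IdSketch {
    // Render a random non-zero 64-bit value as 16 lowercase hex characters,
    // the same wire format B3 uses for trace and span IDs.
    static String newId() {
        long id;
        do {
            id = ThreadLocalRandom.current().nextLong();
        } while (id == 0); // zero is not a valid B3 id
        return String.format("%016x", id);
    }

    public static void main(String[] args) {
        System.out.println("X-B3-TraceId: " + newId());
        System.out.println("X-B3-SpanId: " + newId());
    }
}
```

The same trace ID is propagated to every downstream call of a request, while each hop gets its own span ID; that pair is what lets Kibana and Zipkin stitch the logs back together.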
2. ELK Stack Setup
Elasticsearch and Kibana are assumed to be installed. Install Logstash and create a configuration file (e.g., logstash.conf) with the following content:
input {
  tcp {
    port => 4560
    codec => json_lines
  }
}
output {
  elasticsearch {
    hosts => ["192.168.160.66:9200", "192.168.160.88:9200", "192.168.160.166:9200"]
    index => "applog"
  }
}

Start Elasticsearch and Kibana, then launch Logstash with bin/logstash -f logstash.conf.
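The json_lines codec expects one JSON object per line, each terminated by a newline, which is the format the Logstash TCP appender configured in the next section emits. As a minimal sketch (field set simplified, host and port taken from the config above, helper names hypothetical), this is what shipping such an event over TCP looks like in plain Java:

```java
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class JsonLinesSketch {
    // Build a newline-terminated JSON event roughly in the shape the
    // Logback pattern in section 3 produces (simplified for illustration).
    static String buildEvent(String service, String level, String message) {
        return String.format(
            "{\"service\":\"%s\",\"severity\":\"%s\",\"rest\":\"%s\"}\n",
            service, level, message);
    }

    public static void main(String[] args) {
        String event = buildEvent("myService-provider", "INFO", "hello from sleuth");
        // Ship the event to the Logstash tcp input (port 4560 above).
        try (Socket socket = new Socket("192.168.160.66", 4560)) {
            socket.getOutputStream().write(event.getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            // Logstash not reachable in this environment; print the event instead.
            System.out.print(event);
        }
    }
}
```

In production the Logback appender does this for you; the sketch is only meant to show why the input block needs codec => json_lines rather than the default line codec.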
In Kibana, create an index pattern named applog to visualize the logs.
3. Logback Configuration
Spring Cloud and Logstash support Logback. Create a logback-spring.xml that defines a console appender and a Logstash TCP socket appender. Example snippet:
<configuration scan="true" scanPeriod="10 seconds">
    <springProperty scope="context" name="springAppName" source="spring.application.name"/>
    <property name="CONSOLE_LOG_PATTERN" value="%date [%thread] %-5level %logger{36} - %msg%n"/>

    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <withJansi>true</withJansi>
        <encoder>
            <pattern>${CONSOLE_LOG_PATTERN}</pattern>
            <charset>utf8</charset>
        </encoder>
    </appender>

    <appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>192.168.160.66:4560</destination>
        <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
            <providers>
                <timestamp>
                    <timeZone>UTC</timeZone>
                </timestamp>
                <pattern>
                    <pattern>
                    {
                        "severity": "%level",
                        "service": "${springAppName:-}",
                        "trace": "%X{X-B3-TraceId:-}",
                        "span": "%X{X-B3-SpanId:-}",
                        "exportable": "%X{X-Span-Export:-}",
                        "pid": "${PID:-}",
                        "thread": "%thread",
                        "class": "%logger{40}",
                        "rest": "%message"
                    }
                    </pattern>
                </pattern>
            </providers>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="stdout"/>
        <appender-ref ref="logstash"/>
    </root>
</configuration>

Note that spring.application.name must be defined in bootstrap.yml, because logback-spring.xml is loaded before application.yml.
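For example, a minimal bootstrap.yml (the file name and property are standard Spring Cloud; the service name is illustrative):

```yaml
spring:
  application:
    name: myService-provider
```

If the name is only set in application.yml, the ${springAppName:-} placeholder in the Logstash appender resolves to an empty string and the service field in Elasticsearch stays blank.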
4. Querying Logs
After starting all components, invoke a microservice (e.g., myService-provider). The log appears in the console and is sent to Logstash, then indexed in Elasticsearch.
In Kibana’s Discover view, search the applog index; the log’s rest field contains the message, and the trace and span IDs allow you to follow the full request path across services.
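To follow a request outside Kibana, you can also query Elasticsearch directly for every event that shares a trace ID. A minimal sketch in plain Java (the index name applog and the trace field match the configuration above; the host and the trace ID value are illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class TraceQuerySketch {
    // Build a Lucene query-string search URL for all log events in the
    // applog index that carry the given B3 trace id.
    static String searchUrl(String host, String traceId) {
        return "http://" + host + "/applog/_search?q=trace:" + traceId;
    }

    public static void main(String[] args) {
        String url = searchUrl("192.168.160.66:9200", "463ac35c9f6413ad");
        System.out.println(url);
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
                in.lines().forEach(System.out::println);
            }
        } catch (Exception e) {
            // Elasticsearch not reachable in this environment; the URL is printed above.
        }
    }
}
```

The same trace:&lt;id&gt; query works verbatim in Kibana's Discover search bar.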
With this setup, distributed logs are centrally collected, searchable, and can be used to monitor microservice interactions in production.