
Step‑by‑Step Deployment of the ELK Stack (Elasticsearch, Logstash, Kibana, Filebeat) on Ubuntu

This article provides a comprehensive, hands‑on guide to installing and configuring the ELK logging stack—including Elasticsearch, Logstash, Kibana, and Filebeat—using Docker on an Ubuntu VM, covering architecture, command‑line setup, configuration files, troubleshooting tips, and future extension options.


Introduction

The ELK stack (Elasticsearch, Logstash, Kibana, and Filebeat) is a popular solution for collecting, processing, and visualizing logs. This guide documents a complete end‑to‑end deployment on a single Ubuntu virtual machine, with step‑by‑step commands, configuration examples, and common pitfalls.

ELK Architecture Overview

Logs are written by applications (e.g., via Logback) to disk files. Filebeat reads these files and forwards them to Logstash, which parses and enriches the data before indexing it into Elasticsearch. Kibana visualizes the indexed data.
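
The flow can be sketched as:

```
application (Logback) --> log files --> Filebeat --> Logstash --> Elasticsearch --> Kibana
                                        (ships)     (parses/      (stores/          (visualizes)
                                                     enriches)     indexes)
```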

1. Deploy Elasticsearch

Pull the Elasticsearch Docker image and prepare host directories:

docker pull elasticsearch:7.7.1
mkdir -p /data/elk/es/{config,data,logs}
chown -R 1000:1000 /data/elk/es

Create elasticsearch.yml in /data/elk/es/config with the following content:

cluster.name: "my-es"
network.host: 0.0.0.0
http.port: 9200
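
One host-level prerequisite worth handling before starting the container: the official image fails Elasticsearch's bootstrap checks if the host's vm.max_map_count is below the documented minimum of 262144.

```shell
# Raise vm.max_map_count now, and persist the setting across reboots.
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```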

Run the container:

docker run -d -p 9200:9200 -p 9300:9300 --name es \
  -e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
  -e "discovery.type=single-node" \
  --restart=always \
  -v /data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /data/elk/es/data:/usr/share/elasticsearch/data \
  -v /data/elk/es/logs:/usr/share/elasticsearch/logs \
  elasticsearch:7.7.1

Verify the service:

curl http://localhost:9200
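
Beyond the banner response, the cluster health endpoint gives a one-word status. A minimal check might look like this; on a fresh single node, "yellow" is expected (replica shards have nowhere to go) and is healthy enough for this setup:

```shell
# Extract just the status field from the cluster health response.
curl -s 'http://localhost:9200/_cluster/health' | grep -o '"status":"[a-z]*"'
```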

2. Deploy Kibana

Pull the Kibana image and obtain the Elasticsearch container IP:

docker pull kibana:7.7.1
docker inspect --format '{{ .NetworkSettings.IPAddress }}' es

Create kibana.yml (e.g., in /data/elk/kibana), pointing elasticsearch.hosts at the container IP returned above:

# Default Kibana configuration for the Docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.17.0.2:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true

Run Kibana:

docker run -d --restart=always \
  --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 \
  --name kibana -p 5601:5601 \
  -v /data/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
  kibana:7.7.1

Access the UI at http://<host-ip>:5601 and load the sample data if prompted.
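
A quick readiness check from the shell (the grep pattern assumes the compact-JSON status payload shape of Kibana 7.x):

```shell
# Kibana reports overall state via its status API; look for "green".
curl -s http://localhost:5601/api/status | grep -o '"overall":{"state":"[a-z]*"'
```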

3. Deploy Logstash

Install Java (required by Logstash):

sudo apt install openjdk-8-jdk

Download and extract Logstash:

curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.7.1.tar.gz
tar -xzvf logstash-7.7.1.tar.gz

Test a simple pipeline:

cd logstash-7.7.1
bin/logstash -e 'input { stdin { } } output { stdout {} }'

Create weblog.conf (e.g., /logstash-7.7.1/streamconf/weblog.conf) with the following configuration:

input {
  beats { port => 9900 }
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  mutate { convert => { "bytes" => "integer" } }
  geoip { source => "clientip" }
  useragent { source => "agent" target => "useragent" }
  date { match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"] }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
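
For reference, here is a hypothetical combined-format access log line and the main fields that %{COMBINEDAPACHELOG} extracts from it:

```
83.149.9.216 - - [10/Jun/2020:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "Mozilla/5.0 (X11; Linux x86_64)"

clientip  => 83.149.9.216               verb    => GET
timestamp => 10/Jun/2020:13:55:36 +0000  request => /index.html
response  => 200                        bytes   => 2326
referrer  => "http://example.com/"      agent   => "Mozilla/5.0 (X11; Linux x86_64)"
```

The geoip and useragent filters then enrich the clientip and agent fields, and the date filter sets the event time from timestamp.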

Start Logstash with the config:

bin/logstash -f /logstash-7.7.1/streamconf/weblog.conf
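
Before wiring Filebeat to it, you can confirm the input port is actually listening (ss flags: -l listening sockets, -t TCP, -n numeric ports):

```shell
# Should print a LISTEN line for port 9900 once Logstash is up.
ss -ltn | grep ':9900'
```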

4. Deploy Filebeat

Download and extract Filebeat:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-linux-x86_64.tar.gz
tar xzvf filebeat-7.7.1-linux-x86_64.tar.gz

Create filebeat_apache.yml (example):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/vagrant/logs/*.log

output.logstash:
  hosts: ["192.168.56.10:9900"]
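
To give Filebeat something to ship, append a few Apache-style lines to the watched directory (the path matches the input config above; the log values are made up):

```shell
# Create the watched directory and write two sample access-log lines.
mkdir -p /home/vagrant/logs
cat >> /home/vagrant/logs/access.log <<'EOF'
83.149.9.216 - - [10/Jun/2020:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 2326 "-" "Mozilla/5.0 (X11; Linux x86_64)"
83.149.9.216 - - [10/Jun/2020:13:55:37 +0000] "GET /style.css HTTP/1.1" 200 512 "-" "Mozilla/5.0 (X11; Linux x86_64)"
EOF
```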

Start the pipeline (ensure Logstash is running first):

bin/logstash -f weblog.conf
./filebeat -e -c filebeat_apache.yml

Verify indices in Elasticsearch:

curl http://localhost:9200/_cat/indices?v
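
To see a fully parsed document rather than just index names, pull one hit back. This assumes events flowed through Logstash, whose elasticsearch output defaults to daily logstash-* index names:

```shell
# A grok-parsed event should expose the extracted clientip field in _source.
curl -s 'http://localhost:9200/logstash-*/_search?size=1&pretty' | grep '"clientip"'
```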

In Kibana, create an index pattern (e.g., logstash-*) to explore the logs.

5. Common Issues & Solutions

Docker pull failures due to exhausted disk space or inodes: clean up unused images and containers (e.g., docker system prune) or enlarge the VM's disk.

Kibana startup errors: verify that elasticsearch.hosts in kibana.yml matches the Elasticsearch container's current IP; the container IP can change after a restart.

Future Extensions

Add Kafka between Filebeat and Logstash for higher throughput.

Integrate Grafana for metrics monitoring.

Implement distributed tracing for end‑to‑end visibility.

Tags: Docker, Elasticsearch, Logging, ELK, Logstash, Kibana, Filebeat
Written by Architect

Professional architect sharing high‑quality architecture insights. Topics include high‑availability, high‑performance, high‑stability architectures, big data, machine learning, Java, system and distributed architecture, AI, and practical large‑scale architecture case studies. Open to ideas‑driven architects who enjoy sharing and learning.
