Step‑by‑Step Deployment of the ELK Stack (Elasticsearch, Logstash, Kibana, Filebeat) on Ubuntu
This article provides a comprehensive, hands‑on guide to installing and configuring the ELK logging stack—including Elasticsearch, Logstash, Kibana, and Filebeat—using Docker on an Ubuntu VM, covering architecture, command‑line setup, configuration files, troubleshooting tips, and future extension options.
Introduction
The ELK stack (Elasticsearch, Logstash, Kibana, and Filebeat) is a popular solution for collecting, processing, and visualizing logs. This guide documents a complete end‑to‑end deployment on a single Ubuntu virtual machine, with step‑by‑step commands, configuration examples, and common pitfalls.
ELK Architecture Overview
Logs are written by applications (e.g., via Logback) to disk files. Filebeat reads these files and forwards them to Logstash, which parses and enriches the data before indexing it into Elasticsearch. Kibana visualizes the indexed data.
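To make that flow concrete, here is a toy sketch of the same stages in Python — the names, regex, and in-memory "index" are illustrative stand-ins for this article, not real ELK APIs:

```python
import json
import re

# Toy stand-ins for the pipeline stages described above; none of these
# names correspond to actual Elasticsearch/Logstash/Filebeat APIs.
LOG_PATTERN = re.compile(r'(?P<level>\w+)\s+(?P<message>.*)')

def parse(line):
    """Logstash-like step: turn a raw line into structured fields."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else {"message": line}

index = []  # stands in for an Elasticsearch index

def ship(lines):
    """Filebeat-like step: forward each raw line to the parser/indexer."""
    for line in lines:
        index.append(parse(line))

ship(["INFO user logged in", "ERROR db timeout"])
print(json.dumps(index, indent=2))
```

Each real component does far more (backpressure, retries, mapping), but the shape of the data flow is the same.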
1. Deploy Elasticsearch
Pull the Elasticsearch Docker image and prepare host directories:
```
docker pull elasticsearch:7.7.1
mkdir -p /data/elk/es/{config,data,logs}
chown -R 1000:1000 /data/elk/es
```

Create elasticsearch.yml in /data/elk/es/config with the following content:

```
cluster.name: "my-es"
network.host: 0.0.0.0
http.port: 9200
```

Run the container:
```
docker run -it -d -p 9200:9200 -p 9300:9300 --name es \
  -e ES_JAVA_OPTS="-Xms1g -Xmx1g" \
  -e "discovery.type=single-node" \
  --restart=always \
  -v /data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -v /data/elk/es/data:/usr/share/elasticsearch/data \
  -v /data/elk/es/logs:/usr/share/elasticsearch/logs \
  elasticsearch:7.7.1
```

Verify the service:

```
curl http://localhost:9200
```

2. Deploy Kibana
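Elasticsearch can take a while to boot, so the curl check may fail at first. A small polling helper can make the verification step scriptable — the probe below assumes the default 7.x root response (which includes the `"You Know, for Search"` tagline); the demo at the end uses a stand-in probe so no live cluster is needed:

```python
import json
import time
import urllib.request

def es_is_up(url="http://localhost:9200"):
    """Probe the Elasticsearch root endpoint; True once it answers."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return json.load(resp).get("tagline") == "You Know, for Search"
    except OSError:
        return False

def wait_until(probe, attempts=30, delay=2.0):
    """Generic retry loop: poll `probe` until it returns True or give up."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# Demo with a stand-in probe instead of a live cluster:
flips = iter([False, False, True])
print(wait_until(lambda: next(flips), attempts=5, delay=0))  # True
```

In a deployment script you would call `wait_until(es_is_up)` before starting Kibana or Logstash.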
Pull the Kibana image and obtain the Elasticsearch container IP:
```
docker pull kibana:7.7.1
docker inspect --format '{{ .NetworkSettings.IPAddress }}' es
```

Create kibana.yml (e.g., in /data/elk/kibana) with:
```
# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: ["http://172.17.0.2:9200"]
xpack.monitoring.ui.container.elasticsearch.enabled: true
```

Run Kibana:
```
docker run -d --restart=always \
  --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 \
  --name kibana -p 5601:5601 \
  -v /data/elk/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
  kibana:7.7.1
```

Access the UI at http://<host-ip>:5601 and load sample data if prompted.
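Besides the UI, Kibana exposes a JSON status endpoint (GET /api/status) that is handy for scripted health checks. The snippet below only shows the shape of the field you would inspect — the sample document is illustrative, not captured from a live instance:

```python
import json

# Illustrative fragment of a Kibana /api/status response; a real
# response carries many more fields (version, per-plugin statuses).
sample = '{"status": {"overall": {"state": "green"}}}'

state = json.loads(sample)["status"]["overall"]["state"]
print("Kibana overall state:", state)
```

A wrapper around `urllib.request.urlopen("http://<host-ip>:5601/api/status")` could feed real responses through the same parsing.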
3. Deploy Logstash
Install Java (required by Logstash):
```
sudo apt install openjdk-8-jdk
```

Download and extract Logstash:
```
curl -L -O https://artifacts.elastic.co/downloads/logstash/logstash-7.7.1.tar.gz
tar -xzvf logstash-7.7.1.tar.gz
```

Test a simple pipeline:
```
cd logstash-7.7.1
bin/logstash -e 'input { stdin { } } output { stdout {} }'
```

Create weblog.conf (e.g., /logstash-7.7.1/streamconf/weblog.conf) with the following configuration:
```
input {
  tcp { port => 9900 }
}
filter {
  grok { match => { "message" => "%{COMBINEDAPACHELOG}" } }
  mutate { convert => { "bytes" => "integer" } }
  geoip { source => "clientip" }
  useragent { source => "agent" target => "useragent" }
  date { match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"] }
}
output {
  stdout { }
  elasticsearch { hosts => ["localhost:9200"] }
}
```

Start Logstash with the config:

```
bin/logstash -f /logstash-7.7.1/streamconf/weblog.conf
```

4. Deploy Filebeat
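To see what the filter chain does to a log line, here is a rough Python equivalent of the grok, mutate, and date steps. The regex is a simplified stand-in for the full COMBINEDAPACHELOG pattern, and the log line is fabricated for illustration:

```python
import re
from datetime import datetime

# Simplified stand-in for %{COMBINEDAPACHELOG}; the real grok pattern
# is more permissive, but this captures the same field names.
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+) (?P<httpversion>[^"]+)" '
    r'(?P<response>\d{3}) (?P<bytes>\d+|-) '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('127.0.0.1 - - [10/Oct/2020:13:55:36 +0000] '
        '"GET /index.html HTTP/1.1" 200 2326 '
        '"http://example.com/" "Mozilla/5.0"')

event = COMBINED.match(line).groupdict()
# mutate { convert => { "bytes" => "integer" } }
event["bytes"] = int(event["bytes"]) if event["bytes"] != "-" else 0
# date { match => ["timestamp", "dd/MMM/yyyy:HH:mm:ss Z"] }
event["@timestamp"] = datetime.strptime(event["timestamp"],
                                        "%d/%b/%Y:%H:%M:%S %z")
print(event["clientip"], event["verb"], event["response"], event["bytes"])
```

The geoip and useragent filters have no short equivalent here; they enrich `clientip` and `agent` with lookup databases that Logstash bundles.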
Download and extract Filebeat:
```
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.7.1-linux-x86_64.tar.gz
tar xzvf filebeat-7.7.1-linux-x86_64.tar.gz
```

Create filebeat_apache.yml (example):
```
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/vagrant/logs/*.log
output.elasticsearch:
  hosts: ["192.168.56.10:9200"]
```

Note that this example ships logs straight to Elasticsearch, bypassing the filters in weblog.conf. To route events through Logstash instead, Filebeat would need an output.logstash section pointing at a Logstash beats input; the tcp input above (port 9900) accepts raw TCP clients, not the Beats protocol.

Start the pipeline (ensure Logstash is running first):
```
bin/logstash -f weblog.conf
./filebeat -e -c filebeat_apache.yml
```

Verify indices in Elasticsearch:

```
curl http://localhost:9200/_cat/indices?v
```

In Kibana, create an index pattern (e.g., filebeat-*) to explore the logs.
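The _cat/indices?v response is whitespace-aligned text with a header row, which makes it easy to post-process in a script. A sketch of turning it into records — the sample output below is illustrative (real output has more columns and longer index names):

```python
# Illustrative _cat/indices?v output; real responses include health,
# status, uuid, shard counts, and store sizes for every index.
sample = """\
health status index          docs.count
yellow open   filebeat-7.7.1 1024
green  open   .kibana_1      52
"""

lines = sample.strip().splitlines()
header = lines[0].split()
rows = [dict(zip(header, line.split())) for line in lines[1:]]
for row in rows:
    print(row["index"], row["docs.count"])
```

Splitting on whitespace works here because _cat column values contain no spaces; for machine consumption, `?format=json` is the sturdier option.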
5. Common Issues & Solutions
Docker pull failures with "no space left on device": free disk space or inodes on the VM, or enlarge its disk.
Kibana startup errors: verify that the Elasticsearch IP address matches the container’s IP.
Future Extensions
Add Kafka between Filebeat and Logstash for higher throughput.
Integrate Grafana for metrics monitoring.
Implement distributed tracing for end‑to‑end visibility.