Secure Production ELK Stack with Kafka: Step‑by‑Step Deployment Guide
This guide walks through building a secure, production‑grade logging pipeline by deploying an ELK stack (Elasticsearch, Logstash, Kibana) with X‑Pack security, a Kafka message queue with SASL authentication, and Filebeat agents, covering environment preparation, certificate generation, configuration files, and startup scripts.
Architecture Overview
The architecture consists of Filebeat agents sending logs to a Kafka cluster, Logstash consuming from Kafka, processing the data, and forwarding it to an Elasticsearch cluster; Kibana provides a web UI for searching, aggregating, and visualizing the indexed data.
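When wiring these tiers together, a quick TCP-level probe of each hop narrows down connectivity problems before you dig into component logs. The sketch below is a minimal example using bash's built-in /dev/tcp; the hosts and ports match the examples used later in this guide, so adjust them to your environment.

```shell
#!/bin/bash
# Probe each tier of the pipeline with a short TCP connection attempt.
check() {
  local name=$1 host=$2 port=$3
  if timeout 2 bash -c "echo > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "$name ($host:$port) reachable"
  else
    echo "$name ($host:$port) NOT reachable"
  fi
}
check Zookeeper     192.168.100.83 2181
check Kafka         192.168.100.83 9092
check Elasticsearch 192.168.100.83 9200
check Kibana        192.168.100.83 5601
```

Run it from any host that should be able to reach the cluster; a "NOT reachable" line points at a firewall, routing, or listener problem on that tier.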
Environment Preparation
Disable SELinux, stop firewalld, turn off swap, adjust system limits, set JVM parameters, create a non‑root user, and prepare storage directories. Ensure password‑less SSH between servers.
<code>#!/bin/bash
# Update /etc/hosts
cat >> /etc/hosts <<EOF
192.168.100.83 es83
192.168.100.86 es86
192.168.100.87 es87
EOF
# Stop firewalld
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
swapoff -a
# Increase file limits
cat > /etc/security/limits.conf <<EOF
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited
EOF
# Increase vm.max_map_count
cat >> /etc/sysctl.conf <<EOF
vm.max_map_count=562144
EOF
sysctl -p
# Create elkuser
useradd elkuser
echo 123456 | passwd --stdin elkuser
# Generate SSH keys and copy to peers
ssh-keygen -q -N "" -f ~/.ssh/id_rsa
ssh-copy-id 192.168.100.83
ssh-copy-id 192.168.100.86
ssh-copy-id 192.168.100.87
</code>
Elasticsearch Cluster Deployment
Elasticsearch 7.2.0 is installed on three nodes (es83, es86, es87) with 30 GB heap, G1 GC, and X‑Pack security enabled. Certificates are generated with elasticsearch-certutil and placed in the config directory. The elasticsearch.yml file is configured with cluster name, node roles, data paths, network host, and security settings.
<code># Download and extract
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.2.0-linux-x86_64.tar.gz
tar -xvf elasticsearch-7.2.0-linux-x86_64.tar.gz
# Adjust JVM heap
sed -i -e 's/1g/30g/g' -e '36,38s/^-/#&/' ./config/jvm.options
# Generate CA and node certificates
./bin/elasticsearch-certutil ca
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.83 --out es83.p12
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.86 --out es86.p12
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 --ip 192.168.100.87 --out es87.p12
# Move certificates
cp *.p12 ./config/
# Create elasticsearch.yml
cat > ./config/elasticsearch.yml <<EOF
cluster.name: chilu_elk
node.name: es83
node.master: true
node.data: true
path.data: /logdata/data1,/logdata/data2,/logdata/data3,/logdata/data4,/logdata/data5,/logdata/data6
bootstrap.memory_lock: true
network.host: 192.168.100.83
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["192.168.100.83:9300","192.168.100.86:9300","192.168.100.87:9300"]
cluster.initial_master_nodes: ["es83","es86","es87"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: es83.p12
xpack.security.transport.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: es83.p12
xpack.security.http.ssl.truststore.path: elastic-stack-ca.p12
xpack.security.http.ssl.client_authentication: optional
EOF
# Distribute to other nodes and adjust hostnames
scp -r ./elasticsearch-7.2.0 192.168.100.86:/home/elkuser/
scp -r ./elasticsearch-7.2.0 192.168.100.87:/home/elkuser/
ssh 192.168.100.86 "sed -i -e 's/es83/es86/g' -e '8s/192.168.100.83/192.168.100.86/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"
ssh 192.168.100.87 "sed -i -e 's/es83/es87/g' -e '8s/192.168.100.83/192.168.100.87/' /home/elkuser/elasticsearch-7.2.0/config/elasticsearch.yml"
# Set ownership
chown -R elkuser:elkuser /logdata ./elasticsearch-7.2.0
ssh 192.168.100.86 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"
ssh 192.168.100.87 "chown -R elkuser:elkuser /logdata /home/elkuser/elasticsearch-7.2.0"
# Start Elasticsearch as non‑root
su elkuser -c "./bin/elasticsearch -d"
ssh [email protected] "./elasticsearch-7.2.0/bin/elasticsearch -d"
ssh [email protected] "./elasticsearch-7.2.0/bin/elasticsearch -d"
# Auto‑generate passwords
./bin/elasticsearch-setup-passwords auto | tee elk_pwd.log
# Verify cluster health
curl --tlsv1 -XGET "https://192.168.100.83:9200/_cluster/health?pretty" --user elastic:$(grep '^PASSWORD elastic' elk_pwd.log | awk '{print $4}') -k
</code>
Kafka Cluster Deployment
Kafka 2.3.0 (Scala 2.12) is installed with the built‑in Zookeeper. SASL/PLAIN authentication is configured, and ACLs are set to restrict access to the elk topic.
<code># Download and extract
wget https://archive.apache.org/dist/kafka/2.3.0/kafka_2.12-2.3.0.tgz
tar -xvf kafka_2.12-2.3.0.tgz
# Zookeeper configuration
cat > ./config/zookeeper.properties <<EOF
dataDir=/opt/zookeeper
clientPort=2181
maxClientCnxns=0
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.100.83:2888:3888
server.2=192.168.100.86:2888:3888
server.3=192.168.100.87:2888:3888
EOF
mkdir /opt/zookeeper
echo 1 > /opt/zookeeper/myid
# Kafka configuration with SASL
cat > ./config/server.properties <<EOF
broker.id=83
listeners=SASL_PLAINTEXT://192.168.100.83:9092
advertised.listeners=SASL_PLAINTEXT://192.168.100.83:9092
num.network.threads=5
num.io.threads=8
socket.send.buffer.bytes=1024000
socket.receive.buffer.bytes=1024000
socket.request.max.bytes=1048576000
log.dirs=/logdata/kfkdata1,/logdata/kfkdata2,/logdata/kfkdata3,/logdata/kfkdata4,/logdata/kfkdata5
num.partitions=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
log.retention.hours=72
log.cleaner.enable=true
log.cleanup.policy=delete
log.segment.bytes=1073741824
zookeeper.connect=192.168.100.83:2181,192.168.100.86:2181,192.168.100.87:2181
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
super.users=User:admin;User:kafka
EOF
# JAAS files for SASL authentication
cat > ./config/zk_server_jaas.conf <<EOF
Server {
org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="chilu@rljie" user_kafka="chilu@rljie" user_producer="chilu@rljie";
};
EOF
cat > ./config/kafka_server_jaas.conf <<EOF
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="chilu@rljie" user_admin="chilu@rljie" user_producer="chilu@rljie" user_consumer="chilu@rljie";
};
EOF
cat > ./config/kafka_client_jaas.conf <<EOF
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="chilu@rljie";
};
EOF
# Add SASL options to startup scripts
sed -i -e 's/512M/4G/g' -e 's#Xms4G#Xms4G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/zk_server_jaas.conf#' ./bin/zookeeper-server-start.sh
sed -i -e 's/1G/31G/g' -e 's#Xms31G#Xms31G -Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/kafka_server_jaas.conf#' ./bin/kafka-server-start.sh
# Distribute to other nodes
scp -r /opt/zookeeper /opt/kafka_2.12-2.3.0 192.168.100.86:/opt/
scp -r /opt/zookeeper /opt/kafka_2.12-2.3.0 192.168.100.87:/opt/
ssh 192.168.100.86 "echo 2 > /opt/zookeeper/myid && sed -i '1,3s/83/86/' /opt/kafka_2.12-2.3.0/config/server.properties"
ssh 192.168.100.87 "echo 3 > /opt/zookeeper/myid && sed -i '1,3s/83/87/' /opt/kafka_2.12-2.3.0/config/server.properties"
# Start Zookeeper and Kafka daemons
./bin/zookeeper-server-start.sh -daemon ./config/zookeeper.properties
ssh 192.168.100.86 "./kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"
ssh 192.168.100.87 "./kafka_2.12-2.3.0/bin/zookeeper-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/zookeeper.properties"
./bin/kafka-server-start.sh -daemon ./config/server.properties
ssh 192.168.100.86 "./kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"
ssh 192.168.100.87 "./kafka_2.12-2.3.0/bin/kafka-server-start.sh -daemon /opt/kafka_2.12-2.3.0/config/server.properties"
# ACL configuration script
cat > ./kfkacls.sh <<'EOF'
#!/bin/bash
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=192.168.100.83:2181 \
--add --allow-principal User:producer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=192.168.100.83:2181 \
--add --allow-principal User:producer --topic elk --producer --group chilu
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=192.168.100.83:2181 \
--add --allow-principal User:consumer --allow-host 0.0.0.0 --operation Read --operation Write --topic elk
/opt/kafka_2.12-2.3.0/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=192.168.100.83:2181 \
--add --allow-principal User:consumer --topic elk --consumer --group chilu
EOF
bash ./kfkacls.sh
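# To smoke-test SASL auth with the console tools, a small client properties
# file can be created (the file name and path here are illustrative):
mkdir -p ./config
cat > ./config/client-sasl.properties <<'CLIENTEOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
CLIENTEOF
# Then point KAFKA_OPTS at the client JAAS file and produce/consume a test message:
# export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka_2.12-2.3.0/config/kafka_client_jaas.conf"
# ./bin/kafka-console-producer.sh --broker-list 192.168.100.83:9092 --topic elk --producer.config ./config/client-sasl.properties
# ./bin/kafka-console-consumer.sh --bootstrap-server 192.168.100.83:9092 --topic elk --consumer.config ./config/client-sasl.properties --from-beginning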
</code>
Logstash Service Deployment
Logstash 7.2.0 is installed with 30 GB heap, configured to read from Kafka using SASL/PLAIN and output to Elasticsearch over HTTPS with the generated certificates.
<code># Download and extract
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.2.0.tar.gz
tar -xvf logstash-7.2.0.tar.gz
# Increase JVM heap
sed -i -e 's/1g/30g/g' ./config/jvm.options
# Export the CA certificate from elastic-stack-ca.p12 to PEM and copy it in
openssl pkcs12 -in /home/elkuser/elasticsearch-7.2.0/elastic-stack-ca.p12 -clcerts -nokeys -out /home/elkuser/elasticsearch-7.2.0/root.pem
cp /home/elkuser/elasticsearch-7.2.0/root.pem ./config/
# Logstash configuration (logstash.yml)
cat > ./config/logstash.yml <<EOF
http.host: "192.168.100.83"
node.name: "logstash83"
xpack.monitoring.elasticsearch.hosts: ["https://192.168.100.83:9200"]
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.monitoring.elasticsearch.ssl.certificate_authority: config/root.pem
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
xpack.monitoring.collection.interval: 30s
xpack.monitoring.collection.pipeline.details.enabled: true
EOF
# Kafka client JAAS for Logstash
cat > ./config/kafka-client-jaas.conf <<EOF
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required username="consumer" password="chilu@rljie";
};
EOF
# Sample pipeline configuration (test.cfg)
cat > ./config/test.cfg <<EOF
input {
kafka {
bootstrap_servers => "192.168.100.83:9092,192.168.100.86:9092,192.168.100.87:9092"
client_id => "chilu83"
auto_offset_reset => "latest"
topics => "elk"
group_id => "chilu"
security_protocol => "SASL_PLAINTEXT"
sasl_mechanism => "PLAIN"
jaas_path => "/home/elkuser/logstash-7.2.0/config/kafka-client-jaas.conf"
}
}
filter {}
output {
elasticsearch {
hosts => ["192.168.100.83:9200","192.168.100.86:9200","192.168.100.87:9200"]
user => "elastic"
password => "NvOBRGpUE3DoaSbYaUp3"
ssl => true
cacert => "/home/elkuser/logstash-7.2.0/config/root.pem"
index => "chilu_elk%{+YYYY.MM.dd}"
}
}
EOF
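# Optionally syntax-check the pipeline before starting it
# (assumes the logstash-7.2.0 directory as the working directory):
# ./bin/logstash -t -f ./config/test.cfg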
# Start Logstash
./bin/logstash -r -f ./config/test.cfg
</code>
Kibana Service Deployment
Kibana 7.2.0 is installed with 8 GB heap, secured with HTTPS using a self‑signed certificate, and configured to connect to the secured Elasticsearch cluster.
<code># Download and extract
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.2.0-linux-x86_64.tar.gz
tar -xvf kibana-7.2.0-linux-x86_64.tar.gz
# Increase Node.js memory limit
sed -i 's/warnings/warnings --max_old_space_size=8096/' ./bin/kibana
# Generate a client certificate from the same CA and copy it over
/home/elkuser/elasticsearch-7.2.0/bin/elasticsearch-certutil cert --ca /home/elkuser/elasticsearch-7.2.0/elastic-stack-ca.p12 --name client --out /home/elkuser/elasticsearch-7.2.0/client.p12
cp /home/elkuser/elasticsearch-7.2.0/client.p12 ./config/
# Extract key and certs for Kibana
openssl pkcs12 -in config/client.p12 -nocerts -nodes > config/client.key
openssl pkcs12 -in config/client.p12 -clcerts -nokeys > config/client.cer
openssl pkcs12 -in config/client.p12 -cacerts -nokeys -chain > config/client-ca.cer
# Generate self‑signed server certificate
openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 3650 -out server.crt -subj "/C=CN/ST=guangzhou/L=rljie/O=chilu/OU=linux/"
# Kibana configuration (kibana.yml)
cat > ./config/kibana.yml <<EOF
server.name: kibana
server.host: "192.168.100.83"
elasticsearch.hosts: ["https://192.168.100.83:9200"]
elasticsearch.username: "elastic"
elasticsearch.password: "NvOBRGpUE3DoaSbYaUp3"
xpack.security.enabled: true
elasticsearch.ssl.certificateAuthorities: config/client-ca.cer
elasticsearch.ssl.verificationMode: certificate
xpack.security.encryptionKey: "4297f44b13955235245b2497399d7a93"
xpack.reporting.encryptionKey: "4297f44b13955235245b2497399d7a93"
server.ssl.enabled: true
server.ssl.certificate: server.crt
server.ssl.key: server.key
EOF
# Start Kibana in background
nohup ./bin/kibana --allow-root &
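# Once Kibana is up, spot-check the HTTPS endpoint (self-signed cert, hence -k):
# curl -k -I https://192.168.100.83:5601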
</code>
Filebeat Service Deployment
Filebeat 7.2.0 is installed on each log source, configured to read log files and forward them to the Kafka cluster using SASL authentication.
<code># Download and extract
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.2.0-linux-x86_64.tar.gz
tar -xvf filebeat-7.2.0-linux-x86_64.tar.gz
# Filebeat configuration (filebeat.yml)
cat > ./filebeat.yml <<'EOF'
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/access.log
  close_timeout: 1h
  clean_inactive: 3h
  ignore_older: 2h
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: true
setup.template.settings:
  index.number_of_shards: 3
output.kafka:
  hosts: ["192.168.100.83:9092","192.168.100.86:9092","192.168.100.87:9092"]
  topic: elk
  required_acks: 1
  username: "producer"
  password: "chilu@rljie"
EOF
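# The configuration and the Kafka output can be verified before starting:
# ./filebeat test config -c ./filebeat.yml
# ./filebeat test output -c ./filebeat.yml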
# Start Filebeat in background
nohup ./filebeat -e -c filebeat.yml &
</code>
The complete pipeline collects logs with Filebeat, buffers them in Kafka, processes them with Logstash, stores them in a secured Elasticsearch cluster, and visualizes them via Kibana. All components run with X‑Pack security, TLS encryption, and SASL authentication, providing a robust, production‑ready logging solution.
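To confirm the whole chain end to end, write a unique marker line into a watched log file and then search for it in Elasticsearch once it has flowed through Kafka and Logstash. The sketch below reuses the hosts and credentials from this guide, and uses /tmp/access.log as the watched file for illustration; adjust both to your setup.

```shell
#!/bin/bash
# Write a unique marker into a watched log file, then look for it in Elasticsearch.
ES="https://192.168.100.83:9200"          # any cluster node
AUTH="elastic:NvOBRGpUE3DoaSbYaUp3"       # password from elk_pwd.log
LOG="/tmp/access.log"                     # a path listed under filebeat.inputs

MARKER="elk-smoke-$(date +%s)"
echo "$MARKER test entry" >> "$LOG"
echo "wrote marker: $MARKER"

# Allow time for Filebeat -> Kafka -> Logstash -> Elasticsearch, then search.
sleep 5   # increase on a loaded pipeline
curl -sk -m 10 --user "$AUTH" \
  "$ES/chilu_elk*/_search?q=message:$MARKER&pretty" \
  || echo "Elasticsearch not reachable from this host"
```

A hit in the search response proves every tier is forwarding; no hit after a generous wait usually means the marker is stuck in Kafka (check consumer group lag) or rejected by Logstash.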
Efficient Ops
This public account is run by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career as we grow together.