How to Collect Nginx Access and Error Logs with Filebeat, Logstash, and Rsyslog
This guide demonstrates three ways to collect Nginx access and error logs into the ELK stack: shipping them directly with Filebeat to Elasticsearch, routing Filebeat through Logstash to Elasticsearch, and forwarding them with rsyslog to Logstash. Step-by-step configurations and code snippets are provided for each method.
1. Directly collect logs with Filebeat to Elasticsearch
Locate filebeat.yml in the Filebeat installation directory and configure the log file paths and the Elasticsearch output.
<code>filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/nginx/logs/*.log
</code>Configure the Elasticsearch hosts in the output.elasticsearch section and start Filebeat:
<code>./filebeat -e -c filebeat.yml -d "publish"</code>
Use the elasticsearch-head plugin or Kibana to verify that both access.log and error.log have been indexed.
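For reference, the output section edited above might look like the following sketch (the host address is this guide's example Elasticsearch node, so adjust it to your environment):

```yaml
output.elasticsearch:
  # Example address; replace with your own Elasticsearch node(s).
  hosts: ["172.28.65.24:9200"]
```

From the command line, curl 'http://172.28.65.24:9200/_cat/indices?v' also lists the indices Filebeat has created.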
2. Collect logs via Filebeat to Logstash, then to Elasticsearch
Install Logstash and create filebeat-pipeline.conf:
<code>input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch { hosts => ["172.28.65.24:9200"] }
  stdout { codec => rubydebug }
}
</code>Start Logstash with automatic config reload:
<code>bin/logstash -f filebeat-pipeline.conf --config.reload.automatic</code>
Modify filebeat.yml to disable the Elasticsearch output and enable the Logstash output, pointing it at the Logstash host and port.
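The edited filebeat.yml might then look like this sketch (it assumes Logstash runs on the 172.28.65.32 host used elsewhere in this guide; adjust the address to your environment):

```yaml
# Comment out the Elasticsearch output...
#output.elasticsearch:
#  hosts: ["172.28.65.24:9200"]

# ...and enable the Logstash output instead, pointing at the Beats port.
output.logstash:
  hosts: ["172.28.65.32:5044"]
```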
Run Filebeat again and access the Nginx web service (e.g., http://172.28.65.32/). Logstash will display the incoming logs, and the data will appear in Elasticsearch and Kibana.
3. Collect logs via rsyslog to Logstash, then to Elasticsearch
When direct Filebeat installation is not possible, forward Nginx logs using syslog. Configure Nginx to send logs to a syslog server:
<code>access_log syslog:server=172.28.65.32:514,facility=local7,tag=nginx_access_log,severity=info;
error_log syslog:server=172.28.65.32:514,facility=local7,tag=nginx_error_log,severity=info;
</code>Create syslog-pipeline.conf for Logstash to receive the syslog data:
<code>input {
  syslog {
    type => "system-syslog"
    port => 514
  }
}
output {
  elasticsearch {
    hosts => ["172.28.65.24:9200"]
    index => "system-syslog-%{+YYYY.MM}"
  }
  stdout { codec => rubydebug }
}
</code>Start Logstash with this configuration and verify that it is listening on TCP and UDP port 514 (binding ports below 1024 requires root privileges, so either run Logstash with sufficient rights or pick a higher port on both sides).
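Before pointing Nginx at the listener, it can help to fire a hand-crafted test packet at it. The sketch below (the address is this guide's example Logstash host; the tag and message text are arbitrary) builds a minimal RFC 3164-style syslog payload. It omits the timestamp and hostname header, so Logstash's syslog input may tag it with _grokparsefailure, but it is enough to confirm the port is reachable.

```python
import socket

# Numeric codes from RFC 3164: facility local7 = 23, severity info = 6.
FACILITY_LOCAL7 = 23
SEVERITY_INFO = 6

def build_syslog_message(tag, text, facility=FACILITY_LOCAL7, severity=SEVERITY_INFO):
    """Build a minimal syslog packet: <PRI>TAG: text, where PRI = facility * 8 + severity."""
    pri = facility * 8 + severity
    return "<{}>{}: {}".format(pri, tag, text).encode("utf-8")

if __name__ == "__main__":
    # Example address from this guide; adjust to your Logstash host and port.
    msg = build_syslog_message("nginx_access_log", "syslog pipeline test")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg, ("172.28.65.32", 514))
    sock.close()
```

With local7/info the PRI works out to 23 * 8 + 6 = 190, so the packet starts with &lt;190&gt;, which should match the facility configured in the Nginx directives above.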
Alternatively, configure rsyslog on the log-collection server to read the Nginx log files and forward them to Logstash:
<code>$IncludeConfig /etc/rsyslog.d/*.conf
$ModLoad imfile
$InputFilePollInterval 1
$WorkDirectory /var/spool/rsyslog
$PrivDropToGroup adm
$InputFileName /usr/local/nginx/logs/access.log
$InputFileTag nginx-access:
$InputFileStateFile stat-nginx-access
$InputFileSeverity info
$InputRunFileMonitor
$InputFileName /usr/local/nginx/logs/error.log
$InputFileTag nginx-error:
$InputFileStateFile stat-nginx-error
$InputFileSeverity error
$InputRunFileMonitor
*.* @172.28.65.32:514
</code>Restart rsyslog and access the Nginx service; Logstash will display the forwarded logs, which are also indexed in Elasticsearch.
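In the last line of the rsyslog configuration above, a single @ forwards over UDP; doubling it switches to TCP, which trades some throughput for more reliable delivery (same example address):

```
*.* @@172.28.65.32:514
```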
All three methods provide flexible ways to ingest Nginx access and error logs into the ELK stack; choose the approach that best fits your environment.
Efficient Ops
This public account is maintained by Xiaotianguo and friends and regularly publishes original technical articles. We focus on operations transformation and aim to accompany you throughout your operations career.