
How to Combine ELK and Zabbix for Real‑Time Log Alerting

This guide explains how to integrate ELK's Logstash with Zabbix using the logstash‑output‑zabbix plugin, covering installation, configuration of Logstash pipelines, Zabbix template and trigger setup, and testing the end‑to‑end alerting workflow.


1. What is the relationship between ELK and Zabbix?

ELK (Elasticsearch, Logstash, Kibana) is a log‑collection suite that can gather system, website, and application logs, filter and cleanse them, and store them centrally for real‑time search and analysis.

When you need to extract abnormal log entries (warnings, errors, failures) and notify operators immediately, Zabbix can be used. Logstash reads logs, filters for keywords such as <code>error</code>, <code>failed</code>, and <code>warning</code>, and forwards matching events to Zabbix via the <code>logstash-output-zabbix</code> plugin, which then triggers alerts.

2. Using Logstash with the Zabbix plugin

Logstash supports many output plugins; the <code>logstash-output-zabbix</code> plugin integrates Logstash with Zabbix. Install it with:

<code>[root@elk-master bin]# /usr/share/logstash/bin/logstash-plugin install logstash-output-zabbix</code>

Common plugin commands:

<code># List installed plugins
/usr/share/logstash/bin/logstash-plugin list

# List with version details
/usr/share/logstash/bin/logstash-plugin list --verbose

# List plugins matching a name fragment
/usr/share/logstash/bin/logstash-plugin list "*namefragment*"

# List plugins of a specific group (input, filter, output, or codec)
/usr/share/logstash/bin/logstash-plugin list --group output

# Install a plugin (e.g., the Kafka output)
/usr/share/logstash/bin/logstash-plugin install logstash-output-kafka

# Update all plugins
/usr/share/logstash/bin/logstash-plugin update

# Update a specific plugin
/usr/share/logstash/bin/logstash-plugin update logstash-output-kafka

# Remove a plugin
/usr/share/logstash/bin/logstash-plugin remove logstash-output-kafka</code>

3. Example of using logstash-output-zabbix

After installing the plugin, add the following snippet to a Logstash configuration file:

<code>zabbix {
    zabbix_host => "[@metadata][zabbix_host]"
    zabbix_key => "[@metadata][zabbix_key]"
    zabbix_server_host => "x.x.x.x"
    zabbix_server_port => "xxxx"
    zabbix_value => "xxxx"
}</code>

Key fields:

<code>zabbix_host</code> : name of the event field that holds the Zabbix host name the data belongs to (required).

<code>zabbix_key</code> : name of the event field that holds the Zabbix item key (required).

<code>zabbix_server_host</code> : IP or hostname of the Zabbix server (default <code>localhost</code>).

<code>zabbix_server_port</code> : trapper port of the Zabbix server (default <code>10051</code>).

<code>zabbix_value</code> : name of the event field whose value is sent to the Zabbix item (default <code>message</code>).
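Under the hood, the plugin speaks the same "sender" (trapper) protocol as <code>zabbix_sender</code>: a 5-byte signature, a little-endian payload length, and a JSON body. A rough Python sketch of the packet format (host, key, and value are placeholders taken from the article's example):

```python
import json
import struct

def build_zabbix_packet(host: str, key: str, value: str) -> bytes:
    """Build a Zabbix sender-protocol packet for one host/key/value triple."""
    payload = json.dumps({
        "request": "sender data",
        "data": [{"host": host, "key": key, "value": value}],
    }).encode("utf-8")
    # 5-byte signature "ZBXD\x01", then the payload length as a 64-bit
    # little-endian integer, then the JSON body itself.
    return b"ZBXD\x01" + struct.pack("<Q", len(payload)) + payload

packet = build_zabbix_packet("Zabbix server", "oslogs",
                             "Failed password for root")
print(packet[:5])  # b'ZBXD\x01'
```

This is why the Zabbix side needs a trapper-type item: the server simply accepts whatever value arrives on port 10051 for the configured host and key.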

4. Integrating Logstash with Zabbix

Typical workflow: Logstash reads log files, filters for error keywords, and sends matching events to Zabbix, which then generates alerts.

4.1 Logstash pipeline configuration

Example <code>file_to_zabbix.conf</code>:

<code>input {
    file {
        path => "/var/log/secure"
        type => "system"
        start_position => "beginning"
    }
}

filter {
    grok {
        match => { "message" => "%{SYSLOGTIMESTAMP:message_timestamp} %{SYSLOGHOST:hostname} %{DATA:message_program}(?:\[%{POSINT:message_pid}\])?: %{GREEDYDATA:message_content}" }
    }
    mutate {
        add_field => ["[zabbix_key]","oslogs"]
        add_field => ["[zabbix_host]","Zabbix server"]
        remove_field => ["@version","message"]
    }
    date {
        match => ["message_timestamp","MMM d HH:mm:ss","MMM dd HH:mm:ss","ISO8601"]
    }
}

output {
    elasticsearch {
        index => "oslogs-%{+YYYY.MM.dd}"
        hosts => ["192.168.73.133:9200"]
        user => "elastic"
        password => "Goldwind@2019"
        sniffing => false
    }
    if [message_content] =~ /(ERR|error|ERROR|Failed)/ {
        zabbix {
            zabbix_host => "[zabbix_host]"
            zabbix_key => "[zabbix_key]"
            zabbix_server_host => "192.168.73.133"
            zabbix_server_port => "10051"
            zabbix_value => "message_content"
        }
    }
    #stdout { codec => rubydebug }
}</code>
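The grok pattern in the filter block can be tried out offline before deploying the pipeline. A rough Python-regex equivalent (the sample line is an invented <code>/var/log/secure</code> entry, not taken from the article):

```python
import re

# Approximation of the grok pattern:
# %{SYSLOGTIMESTAMP} %{SYSLOGHOST} %{DATA}(?:\[%{POSINT}\])?: %{GREEDYDATA}
SECURE_LINE = re.compile(
    r"^(?P<message_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<hostname>\S+) "
    r"(?P<message_program>[^\[\s:]+)(?:\[(?P<message_pid>\d+)\])?: "
    r"(?P<message_content>.*)$"
)

line = "Jun  1 12:34:56 web01 sshd[2817]: Failed password for root from 10.0.0.5"
m = SECURE_LINE.match(line)
print(m.group("message_program"))   # sshd
print(m.group("message_content"))   # Failed password for root from 10.0.0.5
```

The extracted <code>message_content</code> field is what the output block tests for error keywords and what <code>zabbix_value</code> points at.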

Start Logstash with:

<code>[root@logstashserver ~]# cd /usr/local/logstash
[root@logstashserver logstash]# nohup bin/logstash -f config/file_to_zabbix.conf --path.data /tmp/ &</code>

4.2 Zabbix side configuration

Create a template named logstash-output-zabbix in Zabbix (Configuration → Templates → Create Template).

Zabbix template creation

Create an application group under the template.

Create application group

Create an item that receives the log content.

Create item

Link the template to the monitored host (e.g., 192.168.73.135) via Configuration → Hosts → select host → Templates → Add.

Link template to host

Generate a trigger that fires when the received data length is greater than 0.

Create trigger
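As a sketch in classic (pre-5.0) Zabbix trigger syntax, assuming the template and item names used above, such a trigger expression might look like:

```
{logstash-output-zabbix:oslogs.strlen()}>0
```

Here <code>strlen()</code> returns the character length of the item's last received value, so any non-empty log line fires the trigger; adjust the host/template reference and recovery behavior to match your environment.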

Test the setup by causing a failed login on the monitored host; Logstash matches the <code>Failed</code> keyword and sends the log to Zabbix, which then triggers an alert (e.g., via DingTalk).

Alert example

In Kibana you can also view the original logs.

Kibana log view

Summary

The architecture remains: Filebeat collects logs, and Logstash processes and forwards them both to Elasticsearch/Kibana and to Zabbix via the <code>logstash-output-zabbix</code> plugin. Ensure Filebeat’s source IP matches the Zabbix host IP; otherwise logs won’t be received. A quick test with <code>zabbix_sender</code> can verify the Zabbix key configuration.

<code># Test sending a value to Zabbix from the server
[root@localhost zabbix_sender]# /usr/local/zabbix/bin/zabbix_sender -s 192.168.73.135 -z 192.168.73.133 -k "oslogs" -o 1
info from server: "processed: 1; failed: 0; total: 1; seconds spent: 0.000081"
sent: 1; skipped: 0; total: 1</code>

Parameters: <code>-s</code> specifies the monitored host as registered in Zabbix, <code>-z</code> the Zabbix server, <code>-k</code> the item key, and <code>-o</code> the value to send.

Tags: operations, alerting, ELK, Log Monitoring, Logstash, Zabbix
Written by Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
