
Common ELK Deployment Architectures and Practical Solutions for Log Management

This article explains the ELK stack components, compares three typical deployment architectures—including Logstash‑only, Filebeat‑based, and a Kafka‑backed version—and provides detailed configuration examples for multiline log merging, timestamp replacement, and module‑based log filtering.


ELK (Elasticsearch, Logstash, Kibana, commonly extended with Beats) is a popular centralized logging solution; this article introduces its components and three typical deployment architectures: Logstash as collector, Filebeat as collector, and a version with a caching queue (Kafka) to handle large volumes.

It compares resource usage, noting that Logstash is resource-heavy while Filebeat is lightweight, which is why the two are commonly paired; the queue-based architecture inserts Kafka between the collectors and Logstash to smooth load spikes and improve reliability.
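As a minimal sketch of the queue-backed variant (broker host names and the topic name here are placeholders, not from the article): Filebeat publishes events to a Kafka topic, and Logstash consumes from that same topic.

```yaml
# Filebeat side: ship events to Kafka instead of directly to Logstash
output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  topic: "app-logs"
```

```
# Logstash side (assumes a recent logstash-input-kafka plugin):
input {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topics => ["app-logs"]
  }
}
```

Because Kafka persists the queue, Logstash can be restarted or fall behind without dropping events.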

The article then addresses three common problems and provides concrete solutions:

1. Multiline log merging – Use the multiline plugin in Filebeat or Logstash. Example Filebeat configuration:

# Filebeat 5.x syntax; in 6.x+ the section is filebeat.inputs with "- type: log"
filebeat.prospectors:
  - paths:
      - /home/project/elk/logs/test.log
    input_type: log
    multiline:
      pattern: '^\['       # lines starting with "[" begin a new event
      negate: true
      match: after         # non-matching lines attach to the preceding line
output:
  logstash:
    hosts: ["localhost:5044"]

Example Logstash configuration:

input {
  beats {
    port => 5044
  }
}
filter {
  # Note: the multiline filter plugin is deprecated and not safe with
  # multiple pipeline workers; prefer merging in Filebeat (as above) or
  # the multiline codec on the input.
  multiline {
    pattern => "%{LOGLEVEL}\s*]"
    negate => true
    what => "previous"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}
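To make the negate/match semantics concrete, here is a small Python sketch (not part of either tool, and the sample log lines are invented for illustration) of the merge rule the configurations above express: any line that does not start with "[" is appended to the previous event.

```python
import re

def merge_multiline(lines, pattern=r'^\[', negate=True):
    """Mirror Filebeat's multiline settings: with negate: true and
    match: after, lines NOT matching the pattern are appended to the
    preceding event instead of starting a new one."""
    events = []
    regex = re.compile(pattern)
    for line in lines:
        matched = bool(regex.search(line))
        is_continuation = (not matched) if negate else matched
        if is_continuation and events:
            events[-1] += "\n" + line   # glue onto the previous event
        else:
            events.append(line)          # start a new event
    return events

logs = [
    "[2023-01-01 10:00:00,123] ERROR something failed",
    "java.lang.NullPointerException",
    "    at com.example.Main.run(Main.java:42)",
    "[2023-01-01 10:00:01,456] INFO recovered",
]
print(merge_multiline(logs))  # two events: the stack trace stays with its ERROR line
```

The four raw lines collapse into two events, so a Java stack trace is indexed as a single document rather than one document per line.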

2. Replacing Kibana’s @timestamp with the log’s own time – Use grok and date filters in Logstash. Example configuration:

filter {
  grok {
    match => ["message", "(?
%{YEAR}%{MONTHNUM}%{MONTHDAY}\s+%{TIME})"]
  }
  date {
    match => ["customer_time", "yyyyMMdd HH:mm:ss,SSS"]
    target => "@timestamp"
  }
}
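As a sanity check on the format string: the Joda-style pattern yyyyMMdd HH:mm:ss,SSS in the date filter matches timestamps like 20230101 10:00:00,123 (the sample value is illustrative). A quick Python equivalent:

```python
from datetime import datetime

# Joda "yyyyMMdd HH:mm:ss,SSS" corresponds to strptime "%Y%m%d %H:%M:%S,%f"
# (Python's %f accepts 1-6 fraction digits, so ",123" parses as 123000 µs).
sample = "20230101 10:00:00,123"
ts = datetime.strptime(sample, "%Y%m%d %H:%M:%S,%f")
print(ts.isoformat())  # → 2023-01-01T10:00:00.123000
```

If the captured customer_time does not match this pattern exactly, the date filter tags the event with _dateparsefailure and leaves @timestamp at ingestion time.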

3. Filtering logs by system module in Kibana – Add a custom field (e.g., log_from) in Filebeat, or use separate indices. Example Filebeat snippet:

filebeat.prospectors:
  - paths:
      - /home/project/elk/logs/account.log
    input_type: log
    multiline:
      pattern: '^\['
      negate: true
      match: after
    fields:
      log_from: account
  - paths:
      - /home/project/elk/logs/customer.log
    input_type: log
    multiline:
      pattern: '^\['
      negate: true
      match: after
    fields:
      log_from: customer
output:
  logstash:
    hosts: ["localhost:5044"]

Corresponding Logstash output, routing each event to an index named after the custom field (Filebeat nests custom fields under fields unless fields_under_root is set):

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "%{[fields][log_from]}"
  }
}
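Alternatively — a sketch assuming the log_from field set in the Filebeat snippet above, with illustrative index names — Logstash conditionals can route each module to an explicit index:

```
output {
  if [fields][log_from] == "account" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "account-logs"
    }
  } else if [fields][log_from] == "customer" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "customer-logs"
    }
  }
}
```

Explicit conditionals avoid accidentally creating an index per unexpected field value, at the cost of editing the pipeline when a new module is added.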


Finally, the article concludes that the Filebeat‑Logstash‑Elasticsearch‑Kibana stack, especially the second architecture, is the most widely used, and that ELK can also serve for monitoring and other scenarios.

Written by

Code Ape Tech Column

Former Ant Group P8 engineer, pure technologist, sharing full‑stack Java, job interview and career advice through a column. Site: java-family.cn
