
Master Loki Logging: Step-by-Step Kubernetes Deployment & Troubleshooting Guide

This comprehensive guide explains Loki's lightweight log aggregation architecture, compares it with ELK, details AllInOne, Helm, Kubernetes, and bare‑metal deployment methods, shows Promtail and Logstash integration, and provides practical troubleshooting tips for common issues.

Raymond Ops

1 Loki

1.1 Introduction

Loki is a lightweight log aggregation and analysis system. It typically uses Promtail to collect logs and stores them so they can be queried through a Grafana data source.

Supported storage backends include Azure, GCS, S3, Swift, and local, with S3 and local being most common. Loki also supports Logstash and Fluentbit as log collectors.

Advantages

- Supports many clients: Promtail, Fluent Bit, Fluentd, Vector, Logstash, and the Grafana Agent.
- Promtail can collect logs from files, the systemd journal, Windows event logs, and Docker containers.
- No fixed log format is required (JSON, XML, CSV, logfmt, or unstructured text).
- The query language, LogQL, is modeled on Prometheus's PromQL.
- Dynamic filtering and transformation at query time.
- Easy metric extraction from logs.
- Minimal indexing: only labels and timestamps are indexed, while log content is stored in compressed chunks, enabling fast sliced queries over chunks.
- Cloud-native, and integrates naturally with Prometheus.
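To illustrate query-time filtering and metric extraction, two LogQL expressions (the label values here are hypothetical):

```logql
# All log lines from the "nginx" app that contain "error"
{app="nginx"} |= "error"

# Per-second rate of those error lines over the last 5 minutes
rate({app="nginx"} |= "error" [5m])
```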

Component Comparison

| Name | Installed Components | Advantages |
| --- | --- | --- |
| ELK/EFK | elasticsearch, logstash, kibana, filebeat, kafka/redis | Custom grok parsing, rich dashboards |
| Loki | grafana, loki, promtail | Low resource usage, native Grafana support, fast queries |

1.2 Loki Working Principle

1.2.1 Log Parsing Format

Loki indexes only the timestamp and a small set of labels (for example, the pod label); the log line itself is stored as unindexed content. An example query is shown below.

Log parsing diagram

Query example:

<code>{app="loki",namespace="kube-public"}</code>
Query result
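The selector above can be narrowed further at query time, for example with a line filter and a parser stage (the filter string is illustrative):

```logql
{app="loki",namespace="kube-public"} |= "level=error" | logfmt
```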

1.2.2 Log Collection Architecture

Architecture diagram

Promtail is recommended as the agent, deployed as a DaemonSet on Kubernetes worker nodes.

1.2.3 Loki Deployment Modes

Loki consists of five micro-services. The <code>memberlist_config</code> section enables horizontal scaling, and the <code>-target</code> flag selects the deployment mode:

- <code>all</code>: a single instance handles both reads and writes.
- <code>read</code>/<code>write</code>: reads and writes run on separate nodes; the query-frontend forwards read requests to read nodes, while write nodes run the distributor and ingester.
- Micro-service mode: each component runs in its own process.
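For a scaled deployment, each node's configuration would point memberlist at its peers; a minimal sketch (the hostnames are placeholders):

```yaml
# loki-all.yaml fragment: form a gossip ring across Loki nodes
memberlist:
  join_members:
  - loki-1:7946
  - loki-2:7946
  - loki-3:7946
```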

1.3 Server‑Side Deployment

Requires a Kubernetes cluster.

1.3.1 AllInOne Deployment

1.3.1.1 Create ConfigMap

Save the full <code>loki-all.yaml</code> configuration to a file and create a ConfigMap named <code>loki</code> (the StatefulSet below mounts it under this name):

<code>kubectl create configmap loki --from-file=./loki-all.yaml</code>

1.3.1.2 Create Persistent Storage

Define a PersistentVolume and PersistentVolumeClaim (example uses hostPath):

<code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: loki
spec:
  hostPath:
    path: /glusterfs/loki
    type: DirectoryOrCreate
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: loki
spec:
  storageClassName: ""   # bind to the pre-created PV instead of a default StorageClass
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: loki
</code>

1.3.1.3 Create StatefulSet

Deploy Loki with a StatefulSet that mounts the ConfigMap and PVC, exposes ports 3100 (HTTP) and 9095 (gRPC), and includes liveness/readiness probes.

<code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: loki
spec:
  serviceName: loki          # StatefulSets require a governing Service name
  replicas: 1
  selector:
    matchLabels:
      app: loki
  template:
    metadata:
      labels:
        app: loki
    spec:
      containers:
      - name: loki
        image: grafana/loki:2.5.0
        args:
        - -config.file=/etc/loki/loki-all.yaml
        ports:
        - containerPort: 3100
          name: http-metrics
        - containerPort: 9095
          name: grpc
        readinessProbe:
          httpGet:
            path: /ready
            port: http-metrics
          initialDelaySeconds: 15
        livenessProbe:
          httpGet:
            path: /ready
            port: http-metrics
          initialDelaySeconds: 30
        volumeMounts:
        - name: config
          mountPath: /etc/loki
        - name: storage
          mountPath: /data
      volumes:
      - name: config
        configMap:
          name: loki
      - name: storage
        persistentVolumeClaim:
          claimName: loki
</code>
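The manifests above do not include a Service, yet Grafana and Promtail reach Loki by the DNS name <code>loki</code>; a minimal ClusterIP Service sketch matching the <code>app=loki</code> label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: loki
spec:
  selector:
    app: loki
  ports:
  - name: http-metrics
    port: 3100
    targetPort: http-metrics
  - name: grpc
    port: 9095
    targetPort: grpc
```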

1.3.1.4 Verify Deployment

Check that the pod reaches <code>Running</code> status and that the distributor shows <code>Active</code>. Use the Loki API to confirm the ingester is ready.

Running status
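Concretely, the check might look like this (the pod label and service address assume the manifests above; the commands need a live cluster):

```shell
# Pod should be Running
kubectl get pods -l app=loki

# Loki reports "ready" once the ingester has joined the ring
kubectl run curl --rm -it --image=curlimages/curl --restart=Never -- \
  curl -s http://loki:3100/ready
```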

1.3.2 Bare‑Metal Deployment

Place the <code>loki</code> binary in <code>/bin</code>, create <code>grafana-loki.service</code>, then reload systemd and manage the service with <code>systemctl</code>.

<code>[Unit]
Description=Grafana Loki Log Ingester
After=network-online.target

[Service]
ExecStart=/bin/loki --config.file /etc/loki/loki-all.yaml
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target
</code>
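With the unit file saved under /etc/systemd/system/, reload systemd and start the service:

```shell
systemctl daemon-reload
systemctl enable --now grafana-loki
systemctl status grafana-loki
```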

1.4 Promtail Deployment

Promtail collects logs and pushes them to Loki.

1.4.1 Kubernetes Deployment

1.4.1.1 Create ConfigMap

Example <code>promtail.yaml</code> configuration:

<code>server:
  log_level: info
  http_listen_port: 3101
clients:
- url: http://loki:3100/loki/api/v1/push
positions:
  filename: /run/promtail/positions.yaml
scrape_configs:
- job_name: kubernetes-pods
  pipeline_stages:
  - cri: {}
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_controller_name]
    regex: ([0-9a-z-.]+?)(-[0-9a-f]{8,10})?
    action: replace
    target_label: __tmp_controller_name
  # additional relabel rules omitted for brevity
</code>

1.4.1.2 Create DaemonSet

Deploy Promtail as a DaemonSet that mounts the ConfigMap and host paths for containers and pod logs.

<code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: promtail
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: promtail
  template:
    metadata:
      labels:
        app.kubernetes.io/name: promtail
    spec:
      serviceAccountName: promtail  # assumes a ServiceAccount with RBAC allowing list/watch on pods
      containers:
      - name: promtail
        image: grafana/promtail:2.5.0
        args:
        - -config.file=/etc/promtail/promtail.yaml
        volumeMounts:
        - name: config
          mountPath: /etc/promtail
        - name: run
          mountPath: /run/promtail
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: pods
          mountPath: /var/log/pods
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: promtail
      - name: run
        hostPath:
          path: /run/promtail
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
      - name: pods
        hostPath:
          path: /var/log/pods
</code>

Apply with <code>kubectl apply -f promtail.yaml</code> and add Loki as a data source in Grafana.

Grafana datasource

1.4.2 Bare‑Metal Promtail

Adjust the <code>clients</code> URL to the Loki host IP, store the config under <code>/etc/loki/</code>, and create a systemd service <code>loki-promtail.service</code> similar to the Loki service.

<code>[Unit]
Description=Promtail Log Collector
After=network-online.target

[Service]
ExecStart=/bin/promtail --config.file /etc/loki/loki-promtail.yaml
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s TERM $MAINPID

[Install]
WantedBy=multi-user.target
</code>
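On bare metal there is no Kubernetes service discovery, so a static_configs scrape is typical; a minimal sketch (the Loki IP and log path are placeholders):

```yaml
server:
  http_listen_port: 3101
clients:
- url: http://192.168.1.10:3100/loki/api/v1/push
positions:
  filename: /etc/loki/positions.yaml
scrape_configs:
- job_name: system
  static_configs:
  - targets: [localhost]
    labels:
      job: varlogs
      __path__: /var/log/*.log   # glob of files Promtail should tail
```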

1.5 Data Source Configuration

In Grafana, navigate to Settings → Data Sources → Add Data Source → Loki. Use the service name (e.g., <code>http://loki:3100</code>) as the URL; Grafana resolves the DNS name inside the cluster.

Grafana datasource settings
Grafana datasource settings

1.6 Other Log Clients

1.6.1 Logstash

Install the Loki output plugin:

<code>bin/logstash-plugin install logstash-output-loki</code>

Configure Logstash output:

<code>output {
  loki {
    url => "http://loki:3100/loki/api/v1/push"
    # additional options...
  }
}
</code>

1.7 Helm Installation

Install Loki with Helm for a quick setup:

<code>helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
helm upgrade --install loki grafana/loki-simple-scalable
# optional namespace and custom values:
helm upgrade --install loki --namespace=loki grafana/loki-simple-scalable --set "key1=val1,key2=val2"
</code>

1.8 Troubleshooting

1.8.1 502 Bad Gateway

Check that the Loki URL is correct (e.g., <code>http://loki</code>, <code>http://loki.namespace</code>, or with an explicit port). Verify network connectivity between Grafana and Loki.

1.8.2 Ingester not ready (JOINING)

In AllInOne mode the ingester may report JOINING for a few seconds after startup; wait briefly and retry.

1.8.3 Too many unhealthy instances

Set the ingester ring's <code>replication_factor</code> to 1 for a single-node deployment.
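In loki-all.yaml this setting lives under the ingester's ring configuration; a fragment (the in-memory kvstore shown is typical for single-node setups):

```yaml
ingester:
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
```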

1.8.4 Data source connected but no labels

Ensure Promtail is correctly configured and able to send logs. If logs were shipped before Loki was ready, delete Promtail's <code>positions.yaml</code> and restart it so those logs are re-read.

Original article link: https://www.cnblogs.com/jingzh/p/17998082 (copyright belongs to the author).

Tags: Observability, Kubernetes, logging, Troubleshooting, Helm, Loki, Promtail
Written by

Raymond Ops

Linux ops automation, cloud-native, Kubernetes, SRE, DevOps, Python, Golang and related tech discussions.
