Configuring Prometheus Alertmanager for Email Alerts and Advanced Templates
This guide explains how to install, configure, and run Prometheus Alertmanager with Docker, set up routing and receivers, integrate it with Prometheus alert rules, test alerts, customize email templates, and optimize notification settings for reliable monitoring and alerting.
Introduction
The article introduces Prometheus as an open‑source monitoring system and Alertmanager as its companion component for handling alerts and notifications.
Alertmanager Overview
Alertmanager receives alerts from Prometheus, routes them based on labels, supports multiple notification channels (email, Slack, webhook, etc.), de‑duplicates, groups alerts, provides team collaboration features, and offers high‑availability deployment.
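Grouping can be pictured as bucketing alerts by the values of the `group_by` labels, so one notification covers many related alerts. A minimal Python sketch of that idea (hypothetical alert dicts, not Alertmanager's actual code):

```python
from collections import defaultdict

def group_alerts(alerts, group_by):
    """Bucket alerts by the values of the group_by labels,
    mirroring how Alertmanager collapses them into one notification."""
    groups = defaultdict(list)
    for alert in alerts:
        key = tuple(alert["labels"].get(label, "") for label in group_by)
        groups[key].append(alert)
    return dict(groups)

alerts = [
    {"labels": {"alertname": "HighCPU", "instance": "node1"}},
    {"labels": {"alertname": "HighCPU", "instance": "node2"}},
    {"labels": {"alertname": "HighMem", "instance": "node1"}},
]
# group_by: ['alertname'] puts both HighCPU alerts into a single group
print(group_alerts(alerts, ["alertname"]))
```

With `group_by: ['alertname']`, the two HighCPU alerts land in one group and would be sent as a single email.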
Alertmanager Configuration
Download
docker pull prom/alertmanager:v0.25.0
Configuration File
Create the directory /data/prometheus/alertmanager and place an alertmanager.yml file there. The file defines a top-level route and a list of receivers that determine where alerts are sent.
route:
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
  receiver: 'web.hook'
receivers:
  - name: 'web.hook'
    webhook_configs:
      - url: 'http://127.0.0.1:5001/'
inhibit_rules:
  - source_match:
      severity: 'critical'
    target_match:
      severity: 'warning'
    equal: ['alertname', 'dev', 'instance']
The configuration routes alerts, defines a webhook receiver, and sets an inhibition rule that suppresses warning-level alerts while a matching critical alert is active.
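The inhibition logic above can be sketched in Python (a simplified, hypothetical data model, not Alertmanager's implementation): a warning alert is suppressed when a critical alert sharing the values of the `equal` labels is active.

```python
def is_inhibited(alert, active_alerts, source_match, target_match, equal):
    """Return True if `alert` matches target_match and some active alert
    matches source_match with identical values for the `equal` labels."""
    labels = alert["labels"]
    if any(labels.get(k) != v for k, v in target_match.items()):
        return False  # alert is not a potential inhibition target
    for other in active_alerts:
        other_labels = other["labels"]
        if all(other_labels.get(k) == v for k, v in source_match.items()) and \
           all(other_labels.get(l) == labels.get(l) for l in equal):
            return True  # a matching critical alert is active
    return False

critical = {"labels": {"severity": "critical", "alertname": "HighCPU", "instance": "node1"}}
warning = {"labels": {"severity": "warning", "alertname": "HighCPU", "instance": "node1"}}
print(is_inhibited(warning, [critical],
                   {"severity": "critical"}, {"severity": "warning"},
                   ["alertname", "instance"]))  # True: warning is suppressed
```

A warning on a different instance would not be inhibited, since the `equal` labels no longer match.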
Start Alertmanager
docker run --name alertmanager -d -p 9093:9093 -v /data/prometheus/alertmanager:/etc/alertmanager prom/alertmanager:v0.25.0
After starting, the UI is reachable at http://127.0.0.1:9093, where the "Alerts" and "Silences" tabs allow inspection and management of alerts.
Linking Prometheus with Alertmanager
Add the following to prometheus.yml to tell Prometheus where Alertmanager lives (replace the IP with your host’s address):
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['10.211.55.2:9093']
Reload or restart Prometheus (docker restart prometheus) and verify the configuration via http://localhost:9090/config and http://localhost:9090/rules.
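Once linked, Prometheus delivers firing alerts to Alertmanager's /api/v2/alerts endpoint as a JSON array. A simplified sketch of what that payload looks like (illustrative only, built locally without any network call):

```python
import json

def alert_payload(alertname, instance, summary):
    """Build the kind of JSON body Prometheus POSTs to
    Alertmanager's /api/v2/alerts endpoint (simplified)."""
    return json.dumps([{
        "labels": {"alertname": alertname, "instance": instance},
        "annotations": {"summary": summary},
    }])

body = alert_payload("hostCpuUsageAlert", "10.211.55.2:9100",
                     "Instance 10.211.55.2:9100 CPU usage high")
print(body)
```

The labels in the payload are exactly what the route's `group_by` and the inhibit rules match against.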
Alert Rule Configuration
Create a directory /data/prometheus/rules, add a rule file, e.g., hoststats-alert.rules, and reference it under rule_files in prometheus.yml so Prometheus loads it:
groups:
  - name: hostStatsAlert
    rules:
      - alert: hostCpuUsageAlert
        expr: sum(avg without (cpu) (irate(node_cpu_seconds_total{mode!='idle'}[5m]))) by (instance) > 0.85
        for: 1m
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} CPU usage high"
          description: "{{ $labels.instance }} CPU usage above 85% (current value: {{ $value }})"
      - alert: hostMemUsageAlert
        # node_exporter >= 0.16 exposes memory metrics with a _bytes suffix
        expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes > 0.85
        for: 1m
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} MEM usage high"
          description: "{{ $labels.instance }} MEM usage above 85% (current value: {{ $value }})"
Reload Prometheus to apply the new rules.
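The CPU rule leans on irate, which takes the last two samples of a counter in the range and divides the increase by the elapsed seconds. A hypothetical sketch of that calculation on sample data:

```python
def irate(samples):
    """Approximate PromQL irate(): per-second increase between the
    last two (timestamp, counter_value) samples of a counter."""
    (t1, v1), (t2, v2) = samples[-2], samples[-1]
    return (v2 - v1) / (t2 - t1)

# node_cpu_seconds_total for one non-idle mode: cumulative CPU seconds
samples = [(0, 100.0), (15, 103.0), (30, 112.0)]
busy_fraction = irate(samples)  # 9 CPU-seconds over 15 s -> 0.6
print(busy_fraction)
```

A value of 0.6 means that CPU spent 60% of the last scrape interval busy; the rule fires when the per-instance average exceeds 0.85 for a full minute.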
Email Alert Configuration
Update alertmanager.yml with SMTP settings (replace credentials with real ones):
global:
  smtp_smarthost: 'smtp.qq.com:465'
  smtp_from: '[email protected]'
  smtp_auth_username: '[email protected]'
  smtp_auth_identity: '[email protected]'
  smtp_auth_password: '123'
  smtp_require_tls: false
route:
  group_by: ['alertname']
  receiver: 'default-receiver'
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 1h
receivers:
  - name: 'default-receiver'
    email_configs:
      - to: '[email protected]'
        send_resolved: true
Restart Alertmanager (docker restart alertmanager) and verify the configuration at http://127.0.0.1:9093/#/status.
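Under the hood this is ordinary SMTP: Alertmanager renders the notification into an HTML email and hands it to the smarthost. For intuition, a minimal Python sketch that builds (but does not send) such a message; the addresses and subject here are placeholders:

```python
from email.mime.text import MIMEText

def build_alert_email(sender, recipient, subject, html_body):
    """Assemble an HTML alert email like the one Alertmanager
    submits to the smarthost configured in smtp_smarthost."""
    msg = MIMEText(html_body, "html", "utf-8")
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    return msg

msg = build_alert_email("[email protected]", "[email protected]",
                        "System Monitoring Alert",
                        "<b>Instance node1 CPU usage high</b>")
print(msg["Subject"])
```

Actually sending it would be one `smtplib.SMTP_SSL` call against the smarthost, which Alertmanager performs for you.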
Testing Alerts
Generate load on the monitored host with cat /dev/zero > /dev/null , then query Prometheus to see the alert fire. The UI will show the alert in the "Firing" state and forward it to Alertmanager, which sends an email.
Custom Email Templates
Create a template file /data/prometheus/alertmanager/notify-template.tmpl containing Go templating syntax, for example:
{{ define "test.html" }}
{{ range .Alerts }}
=========start==========
Alert Level: {{ .Labels.severity }}
Alert Type: {{ .Labels.alertname }}
Host: {{ .Labels.instance }}
Summary: {{ .Annotations.summary }}
Start Time: {{ .StartsAt.Format "2006-01-02 15:04:05" }}
=========end==========
{{ end }}
{{ end }}
Add the template path to alertmanager.yml and reference it in the email config:
templates:
  - '/etc/alertmanager/notify-template.tmpl'
receivers:
  - name: 'default-receiver'
    email_configs:
      - to: '[email protected]'
        html: '{{ template "test.html" . }}'
        send_resolved: true
Template Optimizations
Enhance the template to distinguish firing and resolved alerts, adjust timestamps to Beijing time (+8 h), and add a custom subject header that changes to "System Monitoring Alert Resolved" when the alert is resolved.
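Before encoding the offset in the template, note that the adjustment itself is just a fixed UTC+8 shift (Beijing has no daylight saving). A Python sketch of the conversion, assuming alert timestamps arrive in UTC:

```python
from datetime import datetime, timedelta, timezone

BEIJING = timezone(timedelta(hours=8))  # fixed UTC+8, no DST

def to_beijing(ts_utc):
    """Convert a UTC alert timestamp to Beijing time for display,
    using the same layout as the Go template's Format call."""
    return ts_utc.astimezone(BEIJING).strftime("%Y-%m-%d %H:%M:%S")

start = datetime(2023, 5, 1, 6, 30, 0, tzinfo=timezone.utc)
print(to_beijing(start))  # 2023-05-01 14:30:00
```

The Go template performs the equivalent shift on .StartsAt before formatting.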
global:
  smtp_smarthost: 'smtp.qq.com:465'
  ...
templates:
  - '/etc/alertmanager/notify-template.tmpl'
receivers:
  - name: 'default-receiver'
    email_configs:
      - to: '[email protected]'
        html: '{{ template "test.html" . }}'
        send_resolved: true
        headers: { Subject: "System Monitoring Alert{{- if gt (len .Alerts.Resolved) 0 -}} Resolved{{ end }}" }
Conclusion
The article demonstrated how to deploy Alertmanager, configure routing and receivers, integrate it with Prometheus alert rules, test alerts, and refine email notifications through custom templates, laying the groundwork for comprehensive monitoring and alerting in future chapters.
Wukong Talks Architecture
Explaining distributed systems and architecture through stories. Author of the "JVM Performance Tuning in Practice" column, open-source author of "Spring Cloud in Practice PassJava", and independently developed a PMP practice quiz mini-program.