How to Build a Go Exporter for OVS Bond Metrics with Prometheus and Kubernetes
This article walks through building a Go exporter that collects OVS bond status via shell commands, exposes the data as Prometheus metrics, and deploys the service as a Kubernetes DaemonSet, with step-by-step code, a Dockerfile, and testing instructions.
Introduction
To avoid manually checking OVS bond interfaces on a Kubernetes node, the author decided to write a Go program that gathers bond status via shell commands, exposes the information as Prometheus metrics, and visualizes it in Grafana.
Environment
k8s v1.14
ovs v2.9.5
go 1.14.1
Goal
Expose OVS bond status as a Prometheus metric whose value is the number of disabled interfaces, with the special value 5 indicating a command execution failure.
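The value encoding can be sketched as a small helper (hypothetical `bondMetricValue`, for illustration only; the real exporter computes this inside its collector):

```go
package main

import "fmt"

// bondMetricValue maps bond interface statuses to the metric value
// described above: the number of disabled interfaces, or the special
// value 5 when the underlying command failed.
func bondMetricValue(statuses map[string]string, cmdFailed bool) float64 {
	if cmdFailed {
		return 5 // special value: command execution failure
	}
	var disabled float64
	for _, s := range statuses {
		if s == "disabled" {
			disabled++
		}
	}
	return disabled
}

func main() {
	fmt.Println(bondMetricValue(map[string]string{"a1b1": "enabled", "a4b4": "disabled"}, false)) // 1
	fmt.Println(bondMetricValue(nil, true))                                                      // 5
}
```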
Design
Metrics must follow Prometheus format.
Each bond has two interfaces; use the values 0‑4 to represent the number of disabled interfaces and 5 for errors.
Process the command output and convert interface names to metric labels (Prometheus label names cannot contain hyphens).
Store interface name and status in a map.
Use the client_golang/prometheus library.
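The parsing and hyphen-stripping steps above can be sketched as follows (hypothetical `parseBondLine` helper, not part of the final exporter):

```go
package main

import (
	"fmt"
	"strings"
)

// parseBondLine converts one line of command output, e.g. "a1-b1:enabled",
// into a Prometheus-safe label name (hyphens removed, since label names
// cannot contain them) and a status value.
func parseBondLine(line string) (label, status string) {
	parts := strings.SplitN(line, ":", 2)
	label = strings.ReplaceAll(parts[0], "-", "")
	if len(parts) == 2 {
		status = parts[1]
	}
	return label, status
}

func main() {
	label, status := parseBondLine("a1-b1:enabled")
	fmt.Println(label, status) // a1b1 enabled
}
```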
Implementation
Shell command to fetch bond information
<code># Get the current bond information
[root@test~]$ ovs-appctl bond/show |grep '^slave' |grep -v grep |awk '{print $2""$3}'
a1-b1:enabled
a2-b2:enabled
a3-b3:enabled
a4-b4:disabled</code>
Go code
<code>package main

import (
	"net/http"
	"os/exec"
	"sort"
	"strings"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	log "github.com/sirupsen/logrus"
)

// getBondStatus executes the shell command and returns a map of interface
// name (hyphens removed) to status. On failure the map contains a single
// "msg" key describing the error.
func getBondStatus() map[string]string {
	result, err := exec.Command("bash", "-c",
		"ovs-appctl bond/show | grep '^slave' | grep -v grep | awk '{print $2\"\"$3}'").Output()
	if err != nil {
		log.Error("result: ", string(result))
		log.Error("command failed: ", err.Error())
		return map[string]string{"msg": "failure"}
	}
	if len(result) == 0 {
		log.Error("command executed but returned no output")
		return map[string]string{"msg": "return null"}
	}
	nMap := make(map[string]string)
	for _, line := range strings.Split(strings.TrimSpace(string(result)), "\n") {
		parts := strings.SplitN(line, ":", 2)
		if len(parts) != 2 {
			continue
		}
		// Prometheus label names cannot contain hyphens, so strip them.
		nMap[strings.ReplaceAll(parts[0], "-", "")] = parts[1]
	}
	return nMap
}

// ovsCollector implements the prometheus.Collector interface.
type ovsCollector struct {
	ovsMetric *prometheus.Desc
}

func (c *ovsCollector) Describe(ch chan<- *prometheus.Desc) {
	ch <- c.ovsMetric
}

var (
	vLabel     []string
	constLabel = prometheus.Labels{"component": "ovs"}
)

func newOvsCollector() *ovsCollector {
	rm := getBondStatus()
	if _, ok := rm["msg"]; ok {
		log.Error("command execution failed: ", rm["msg"])
	} else {
		for k := range rm {
			vLabel = append(vLabel, k)
		}
		// Sort the label names so that the label values emitted in Collect
		// line up with them (Go map iteration order is random).
		sort.Strings(vLabel)
	}
	return &ovsCollector{ovsMetric: prometheus.NewDesc(
		"ovs_bond_status",
		"Show ovs bond status",
		vLabel,
		constLabel,
	)}
}

func (c *ovsCollector) Collect(ch chan<- prometheus.Metric) {
	var metricValue float64
	rm := getBondStatus()
	if _, ok := rm["msg"]; ok {
		log.Error("command execution failed")
		metricValue = 5 // special value for command failures
		// Emit empty label values so the value count still matches the
		// registered label names.
		ch <- prometheus.MustNewConstMetric(c.ovsMetric, prometheus.CounterValue,
			metricValue, make([]string, len(vLabel))...)
		return
	}
	// Emit label values in the same sorted order as the registered names.
	vValue := make([]string, 0, len(vLabel))
	for _, k := range vLabel {
		v := rm[k]
		vValue = append(vValue, v)
		if v == "disabled" {
			metricValue++
		}
	}
	ch <- prometheus.MustNewConstMetric(c.ovsMetric, prometheus.CounterValue, metricValue, vValue...)
}

func main() {
	ovs := newOvsCollector()
	prometheus.MustRegister(ovs)
	http.Handle("/metrics", promhttp.Handler())
	log.Info("beginning to serve on port 8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
</code>
Deployment
Dockerfile
<code>FROM golang:1.14.1 AS builder
WORKDIR /go/src
COPY ./ .
RUN go build -o ovs_check main.go
# runtime
FROM centos:7.7
COPY --from=builder /go/src/ovs_check /xiyangxixia/ovs_check
ENTRYPOINT ["/xiyangxixia/ovs_check"]
</code>
Kubernetes DaemonSet (yaml excerpt)
<code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ovs-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: ovs-agent
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
      labels:
        name: ovs-agent
    spec:
      containers:
      - name: ovs-agent
        image: ovs_bond:v1
        ports:
        - containerPort: 8080
        securityContext:
          privileged: true
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: ovs-run
          mountPath: /var/run/openvswitch
        - name: ovs-bin
          mountPath: /usr/bin/ovs-appctl
          subPath: ovs-appctl
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: ovs-run
        hostPath:
          path: /var/run/openvswitch
      - name: ovs-bin
        hostPath:
          path: /usr/bin/
</code>
Testing
<code># Verify pod is running
kubectl get po -n kube-system -o wide | grep ovs
# Query metrics
curl 10.211.55.41:8080/metrics | grep ovs_bond
# Example output
# HELP ovs_bond_status Show ovs bond status
# TYPE ovs_bond_status counter
ovs_bond_status{component="ovs",a1b1="enabled",a2b2="enabled",a3b3="enabled",a4b4="enabled"} 0
</code>
Conclusion
The guide demonstrates how to create a lightweight Go exporter for OVS bond status, package it into a container, deploy it as a DaemonSet in Kubernetes, and verify the Prometheus metrics, providing a practical example for monitoring network interfaces.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.