
How to Manually Deploy Grafana on Kubernetes with Persistent Storage and Ingress

This guide walks you through deploying Grafana as a StatefulSet on Kubernetes using a custom StorageClass for data persistence, configuring service accounts, ConfigMaps, Secrets, and exposing the service via ingress‑nginx alongside Prometheus and Alertmanager ingress resources.


Environment

The tutorial uses a local Kubernetes 1.17.7 cluster on Ubuntu 18.04 nodes, provisioned with sealos for quick testing.

Deploy Grafana

Create the ServiceAccount:

<code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: grafana
  namespace: kube-system
</code>

Create the StorageClass for persistent data:

<code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: grafana-lpv
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code>

Create the PersistentVolume:

<code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: grafana-pv-0
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: grafana-lpv
  local:
    path: /data/grafana-data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - sealos-k8s-m2
</code>
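The `nodeAffinity` block pins the local volume to a single node, so the value under `kubernetes.io/hostname` must exactly match that node's label. A quick sanity check (assuming the node name `sealos-k8s-m2` from the manifest above):

```shell
# Print the hostname label the PV's nodeAffinity will be matched against.
# Dots inside a label key are escaped with a backslash in jsonpath.
kubectl get node sealos-k8s-m2 \
  -o jsonpath='{.metadata.labels.kubernetes\.io/hostname}'
```

If the output differs from the value in the PV manifest, the pod will stay Pending once scheduled.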

Prepare the data directory on the chosen node:

<code># Run on the node targeted by the PV's nodeAffinity (sealos-k8s-m2 here)
mkdir -p /data/grafana-data
chown -R 65534:65534 /data/grafana-data
</code>

Create ConfigMaps for dashboards and datasources (adjust the Prometheus DNS address as needed):

<code># grafana-dashboard-configmap.yaml (content omitted for brevity)
# grafana-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-datasources
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana

data:
  datasources.yaml: |
    apiVersion: 1
    datasources:
    - access: proxy
      isDefault: true
      name: prometheus
      type: prometheus
      url: http://prometheus-0.prometheus.kube-system.svc.cluster.local:9090
      version: 1
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboardproviders
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana

data:
  dashboardproviders.yaml: |
    apiVersion: 1
    providers:
    - name: default
      orgId: 1
      folder: ""
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards
</code>
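The datasource URL assumes Prometheus runs as a StatefulSet pod `prometheus-0` behind a headless service named `prometheus` in `kube-system`. If your setup differs, verify the DNS name resolves before Grafana depends on it; one way is a throwaway pod (a sketch, using `busybox:1.28` because its `nslookup` behaves reliably):

```shell
# One-off pod to confirm the Prometheus DNS name resolves in-cluster.
kubectl -n kube-system run dns-check --rm -it --restart=Never \
  --image=busybox:1.28 -- \
  nslookup prometheus-0.prometheus.kube-system.svc.cluster.local
```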

Optionally, create a Secret for the admin credentials (the StatefulSet below sets them via plain environment variables instead; switch to `secretKeyRef` if you use this):

<code>apiVersion: v1
kind: Secret
metadata:
  name: grafana-secret
  namespace: kube-system
  labels:
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
type: Opaque
data:
  # Values under data: must be base64-encoded ("admin" / "123456")
  admin-user: YWRtaW4=
  admin-password: MTIzNDU2
</code>
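Everything under a Secret's `data:` key must be base64-encoded. A quick way to produce the values (use `printf` rather than `echo` so no trailing newline sneaks into the encoding):

```shell
# Encode the admin credentials for the Secret's data fields.
printf '%s' 'admin'  | base64   # YWRtaW4=
printf '%s' '123456' | base64   # MTIzNDU2
```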

Create the StatefulSet that mounts the PV and ConfigMaps:

<code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: grafana
  namespace: kube-system
  labels:
    k8s-app: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
spec:
  serviceName: grafana
  replicas: 1
  selector:
    matchLabels:
      k8s-app: grafana
  template:
    metadata:
      labels:
        k8s-app: grafana
    spec:
      serviceAccountName: grafana
      initContainers:
      - name: "init-chmod-data"
        image: debian:9
        imagePullPolicy: "IfNotPresent"
        command: ["chmod", "777", "/var/lib/grafana"]
        volumeMounts:
        - name: grafana-data
          mountPath: "/var/lib/grafana"
      containers:
      - name: grafana
        image: grafana/grafana:7.1.0
        imagePullPolicy: Always
        volumeMounts:
        - name: dashboards
          mountPath: "/var/lib/grafana/dashboards"
        - name: datasources
          mountPath: "/etc/grafana/provisioning/datasources"
        - name: grafana-dashboardproviders
          mountPath: "/etc/grafana/provisioning/dashboards"
        - name: grafana-data
          mountPath: "/var/lib/grafana"
        ports:
        - name: grafana
          containerPort: 3000
          protocol: TCP
        env:
        - name: GF_SECURITY_ADMIN_USER
          value: "admin"
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "admin"
        livenessProbe:
          httpGet:
            path: /api/health
            port: 3000
        readinessProbe:
          httpGet:
            path: /api/health
            port: 3000
          initialDelaySeconds: 60
          timeoutSeconds: 30
          failureThreshold: 10
          periodSeconds: 10
        resources:
          limits:
            cpu: 50m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 100Mi
      volumes:
      - name: datasources
        configMap:
          name: grafana-datasources
      - name: grafana-dashboardproviders
        configMap:
          name: grafana-dashboardproviders
      - name: dashboards
        configMap:
          name: grafana-dashboards
  volumeClaimTemplates:
  - metadata:
      name: grafana-data
    spec:
      storageClassName: "grafana-lpv"
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: "2Gi"
</code>
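Because the StorageClass uses `WaitForFirstConsumer`, the claim stays Pending until the pod is scheduled onto `sealos-k8s-m2`, then binds to `grafana-pv-0`. After applying, you can watch both settle (the PVC name is the template name plus the pod name):

```shell
# Watch the StatefulSet roll out and the claim bind to the local PV.
kubectl -n kube-system rollout status statefulset/grafana
kubectl -n kube-system get pvc grafana-data-grafana-0
```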

Create the Service for the StatefulSet:

<code>apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    k8s-app: grafana
    app.kubernetes.io/name: grafana
    app.kubernetes.io/component: grafana
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3000
  selector:
    k8s-app: grafana
</code>
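Before wiring up ingress, it is worth smoke-testing Grafana through the Service; a sketch using a temporary port-forward against the health endpoint:

```shell
# Forward the Service's port 80 (targetPort 3000) and hit /api/health.
kubectl -n kube-system port-forward svc/grafana 3000:80 &
sleep 2
curl -s http://localhost:3000/api/health
kill %1
```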

Apply all Grafana manifests:

<code>cd /data/manual-deploy/grafana
kubectl apply -f .
</code>

Verify the resources:

<code>kubectl -n kube-system get sa,pod,svc,ep,sc,secret | grep grafana
</code>

Deploy ingress‑nginx

Define the namespace and Service:

<code>apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
</code>

Provide mandatory ConfigMaps, ServiceAccount, RBAC roles, and the Deployment (3 replicas, hostNetwork enabled):

<code># Omitted for brevity – includes ConfigMaps (nginx-configuration, tcp-services, udp-services), ServiceAccount, ClusterRole, Role, RoleBinding, ClusterRoleBinding, and Deployment with args pointing to the ConfigMaps and hostNetwork: true.
</code>

Apply the ingress‑nginx manifests:

<code>cd /data/manual-deploy/ingress-nginx
kubectl apply -f .
</code>

Verify the ingress‑nginx pods and service:

<code>kubectl -n ingress-nginx get pod,svc,ep
</code>

Ingress resources for monitoring services

Prometheus ingress:

<code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "prometheus-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: prom.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus
          servicePort: 9090
  tls:
  - hosts:
    - prom.example.com
</code>

Alertmanager ingress:

<code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: alertmanager-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "alert-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: alert.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: alertmanager-operated
          servicePort: 9093
  tls:
  - hosts:
    - alert.example.com
</code>

Grafana ingress:

<code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: kube-system
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-name: "grafana-cookie"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: "letsencrypt-local"
    kubernetes.io/tls-acme: "false"
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: http
  tls:
  - hosts:
    - grafana.example.com
</code>

Apply the ingress manifests and verify:

<code>kubectl apply -f alertmanager-ingress.yaml
kubectl apply -f prometheus-ingress.yaml
kubectl apply -f grafana-ingress.yaml
kubectl -n kube-system get ingresses
</code>
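With `hostNetwork: true`, the controllers listen on port 80 of the nodes they run on, so you can test a rule before touching DNS by sending the Host header directly (the node IP below is a placeholder):

```shell
# NODE_IP is hypothetical; use the IP of a node running an ingress-nginx pod.
NODE_IP=192.168.0.11
curl -s -H 'Host: grafana.example.com' "http://$NODE_IP/api/health"
```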

Update your local DNS or hosts file to map the hostnames to the IP of a node running an ingress-nginx pod (the controllers use hostNetwork, so there is no load-balancer IP), and optionally configure TLS certificates.
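A small sketch that prints the hosts-file entries; append its output to `/etc/hosts` on your workstation (the node IP is an assumption, substitute your own):

```shell
# Print /etc/hosts entries for the three ingress hostnames.
NODE_IP=192.168.0.11   # hypothetical ingress node IP; replace with yours
for h in grafana.example.com prom.example.com alert.example.com; do
  printf '%s %s\n' "$NODE_IP" "$h"
done
```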

That completes the deployment. Keep component versions aligned with this environment (Kubernetes 1.17, Grafana 7.1, an ingress-nginx release that still serves extensions/v1beta1 Ingress) to avoid compatibility surprises.

Tags: deployment, kubernetes, ingress, Grafana, StatefulSet, StorageClass
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
