Using Velero for Kubernetes Backup and Restore with MinIO
This guide explains how to use Velero to back up and restore Kubernetes clusters, covering its principles, backup types, storage configuration, installation of Velero and MinIO, and step‑by‑step commands for creating, testing, and recovering a MySQL workload.
Velero (https://velero.io) provides backup and restore capabilities for Kubernetes cluster resources and persistent volumes, allowing data protection, migration, and cloning of clusters in public or private cloud environments.
Principle
All Velero operations (on-demand backup, scheduled backup, restore) are represented as custom resources (CRDs). Velero can back up or restore all objects, with optional filtering by type, namespace, or label. It is suitable for disaster recovery and for taking application snapshots before system upgrades.
On‑demand backup creates compressed archives of Kubernetes objects and optionally snapshots persistent volumes. Backup hooks can be defined to run custom actions before snapshots.
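Backup hooks can be declared as annotations on the pods to be backed up. A hedged sketch using Velero's standard hook annotation keys (the namespace, pod name, and commands are illustrative):

```shell
# Quiesce MySQL before Velero snapshots its volume, and run a
# post-backup command afterwards. Pod name and commands are examples.
kubectl -n kube-demo annotate pod/mysql-0 \
  pre.hook.backup.velero.io/command='["/bin/sh", "-c", "mysqladmin flush-tables"]' \
  pre.hook.backup.velero.io/timeout=30s \
  post.hook.backup.velero.io/command='["/bin/sh", "-c", "echo backup done"]'
```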
Scheduled backup uses a Schedule CRD; backups are triggered according to a cron expression. Backup names follow the pattern <SCHEDULE NAME>-<TIMESTAMP>, where the timestamp uses the format YYYYMMDDhhmmss.
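For example, a schedule that runs every night at 01:00 can be created with the Velero CLI (the schedule name is illustrative):

```shell
# Create a schedule named "daily" that runs at 01:00 every day
# and retains each backup for 30 days (720 hours).
velero schedule create daily --schedule="0 1 * * *" --ttl 720h0m0s

# Resulting backups are named daily-<YYYYMMDDhhmmss>.
velero schedule get
```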
Restore can recreate all objects and volumes from a backup, with options for namespace remapping and selective restoration. Restore names follow <BACKUP NAME>-<TIMESTAMP>. Storage locations can be set to read-only during restore to prevent accidental changes.
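Namespace remapping and selective restoration are exposed as CLI flags; a sketch (the backup and namespace names are illustrative):

```shell
# Restore only the kube-demo namespace from a backup, remapping it
# to kube-demo-restored on the target cluster.
velero restore create --from-backup mysql-backup \
  --include-namespaces kube-demo \
  --namespace-mappings kube-demo:kube-demo-restored
```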
Backup Flow
When executing velero backup create test-backup, the client creates a Backup CRD, the controller validates it, gathers resources from the API server, and uploads the backup archive to an object store (e.g., S3).
Backups support volume snapshots via the --snapshot-volumes flag (default enabled). The --ttl flag can set a time‑to‑live for automatic deletion of expired backups, associated files, and snapshots.
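Both flags can be combined on a single command; a sketch (the backup name is illustrative):

```shell
# Back up without volume snapshots and with a 72-hour retention period;
# Velero garbage-collects the backup, its files, and any snapshots
# once the TTL expires.
velero backup create nightly-app --snapshot-volumes=false --ttl 72h0m0s
```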
Backup Storage and Volume Snapshot Locations
Velero defines two custom resources: BackupStorageLocation (where backup data is stored) and VolumeSnapshotLocation (where volume snapshots are stored). Multiple locations can be configured to support different clouds, regions, or on‑prem storage.
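An additional storage location can be registered and targeted per backup; a hedged sketch assuming a second MinIO endpoint (the location name, bucket, and URL are illustrative):

```shell
# Register a second backup storage location backed by another
# S3-compatible endpoint.
velero backup-location create secondary \
  --provider aws \
  --bucket velero-dr \
  --config region=minio,s3ForcePathStyle="true",s3Url=http://minio2.velero.svc:9000

# Direct a backup at that location instead of the default.
velero backup create dr-test --storage-location secondary
```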
Installation
Download the Velero binary from the GitHub releases page (e.g., v1.6.3) and place it in /usr/local/bin (the example below uses the darwin-amd64 archive; substitute linux-amd64 on Linux hosts):
wget https://github.com/vmware-tanzu/velero/releases/download/v1.6.3/velero-v1.6.3-darwin-amd64.tar.gz
tar -zxvf velero-v1.6.3-darwin-amd64.tar.gz && cd velero-v1.6.3-darwin-amd64
cp velero /usr/local/bin && chmod +x /usr/local/bin/velero
velero version

Deploy MinIO as an S3-compatible object store inside the cluster using the provided examples/minio/00-minio-deployment.yaml manifest, adjusting the Service to NodePort (and creating a systemd service for an external deployment if needed).
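The in-cluster MinIO deployment can be sketched as follows, assuming the example manifest from the Velero release tarball (which creates the velero namespace and a minio Service in it):

```shell
# Deploy the bundled MinIO example.
kubectl apply -f examples/minio/00-minio-deployment.yaml

# Expose MinIO outside the cluster via NodePort.
kubectl -n velero patch svc minio -p '{"spec": {"type": "NodePort"}}'
kubectl -n velero get svc minio
```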
Configure a credentials file (credentials-velero) with MinIO access keys, then install Velero with:
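The credentials file uses AWS-style INI format; the keys shown here (minio/minio123) are the defaults from Velero's MinIO example manifest and should be replaced with your own:

```shell
# Write the MinIO access keys in the AWS shared-credentials format
# that the velero-plugin-for-aws expects.
cat > credentials-velero <<'EOF'
[default]
aws_access_key_id = minio
aws_secret_access_key = minio123
EOF
```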
velero install \
--provider aws \
--bucket velero \
--image velero/velero:v1.6.3 \
--plugins velero/velero-plugin-for-aws:v1.2.1 \
--namespace velero \
--secret-file ./credentials-velero \
--use-volume-snapshots=false \
--use-restic \
--backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://minio.velero.svc:9000

Test
Deploy a MySQL workload in a new namespace, then create a backup that includes the namespace and uses Restic for volume snapshots:
velero backup create mysql-backup --include-namespaces kube-demo --default-volumes-to-restic

After the backup reaches Completed, delete the namespace to simulate a disaster, and restore the workload from the backup:
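The backup's progress can be checked with the standard Velero inspection commands:

```shell
# List backups and watch for the phase to reach Completed.
velero backup get

# Inspect resources, volumes, and any errors for this backup.
velero backup describe mysql-backup --details
velero backup logs mysql-backup
```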
velero restore create --from-backup mysql-backup

Verify that the restored namespace, MySQL pod, and the velero database are present, confirming successful recovery.
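Verification can be sketched as follows (the workload object names follow the MySQL example in this guide and may differ in your cluster):

```shell
# Confirm the restore finished without errors.
velero restore get

# Confirm the namespace, pods, and persistent volume claims are back.
kubectl -n kube-demo get pods,pvc
```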
Velero can also migrate resources between clusters by pointing multiple instances to the same object store, and supports advanced features such as backup hooks and scheduled backups.
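A cross-cluster migration can be sketched under the assumption that Velero on the target cluster is installed against the same bucket and credentials as the source:

```shell
# On the target cluster: backups written by the source cluster appear
# automatically once both clusters share the same object store.
velero backup get

# Recreate the workload on the target cluster from a shared backup.
velero restore create --from-backup mysql-backup
```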
DevOps Cloud Academy
Exploring industry DevOps practices and technical expertise.