
Deploying MinIO on Kubernetes and Configuring GitLab Runner S3 Cache

This guide explains how to deploy MinIO object storage on Kubernetes using a PersistentVolume and Helm, configure GitLab Runner to use MinIO as an S3 cache, troubleshoot common issues, and verify the setup with a sample CI pipeline.

DevOps Cloud Academy

MinIO is an open‑source object storage service compatible with the Amazon S3 API, suitable for storing large amounts of unstructured data such as images, videos, logs, backups, and container images. It integrates easily with application stacks built on components like Node.js, Redis, or MySQL.

Kubernetes deployment: First, create a PersistentVolume that points to a local directory (e.g., /data/devops/minio-data) and apply it with kubectl create -f pv.yaml. Then install MinIO via Helm, customizing values.yml to set the service type, port, and ingress configuration. Example snippets:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ci-minio-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/devops/minio-data"
# Install the chart, pointing persistence at the PV created above.
# The requested size must not exceed the PV's 10Gi capacity, or the claim stays Pending.
helm install minio --namespace devops \
  --set persistence.size=10Gi,persistence.VolumeName=ci-minio-pv,persistence.storageClass=manual ./minio
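After the install, it is worth confirming that the chart's claim actually bound to the pre-created PV; a Pending claim here usually means a storageClassName or capacity mismatch. A quick check (assuming the PV and namespace names used above):

```shell
# Both objects should report STATUS: Bound before you proceed
kubectl get pv ci-minio-pv
kubectl get pvc -n devops
```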

After installation, access MinIO inside the cluster via the DNS name minio.devops.svc.cluster.local or from localhost using port‑forwarding and the mc client.

# Export pod name
export POD_NAME=$(kubectl get pods --namespace devops -l "release=minio" -o jsonpath="{.items[0].metadata.name}")
# Port‑forward
kubectl port-forward $POD_NAME 9000 --namespace devops
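With the port‑forward active, the mc client can be pointed at the local tunnel. A minimal sketch, assuming the accesskey/secretkey configured in the MinIO Helm release and that the cache bucket does not exist yet (the alias name ci-minio is arbitrary):

```shell
# Register the tunnelled endpoint under an alias
# (keys are placeholders; use the ones set in the MinIO Helm release)
mc alias set ci-minio http://127.0.0.1:9000 AKIAIOSFODNN7EXAMPLE wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# Create the bucket the Runner cache will use, then list buckets to verify
mc mb ci-minio/gitlab-ci-runner-cache
mc ls ci-minio
```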

Configuring GitLab Runner to use S3 storage: Create a Kubernetes secret that holds the S3 credentials, then edit the Runner Helm chart's values.yml to enable the S3 cache and fill in the server address, bucket name, and secret name.

kubectl create secret generic s3access \
  --namespace=gitlab-runner \
  --from-literal=accesskey="AKIAIOSFODNN7EXAMPLE" \
  --from-literal=secretkey="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"

In values.yml, the cache block sits under the runners key:

runners:
  cache:
    cacheType: s3
    cachePath: "gitlab-runner"
    cacheShared: true
    s3ServerAddress: minio.devops.svc.cluster.local
    s3BucketName: gitlab-ci-runner-cache
    s3CacheInsecure: false
    secretName: s3access

Because the official chart ignored the s3CacheInsecure flag, the template was patched to use the actual value:

- name: CACHE_S3_INSECURE
  value: {{ default "" .Values.runners.cache.s3CacheInsecure | quote }}

After updating the chart with helm upgrade gitlab-runner ./gitlab-runner --namespace gitlab-runner, the Runner configuration can be verified inside the pod.
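A rough way to do that verification, assuming the chart's default pod label and the usual config path inside the Runner image (both may differ per chart version):

```shell
# Grab the Runner pod name (adjust the label selector to your chart version)
RUNNER_POD=$(kubectl get pods -n gitlab-runner -l "app=gitlab-runner" \
  -o jsonpath="{.items[0].metadata.name}")

# Print the generated config; the [runners.cache] section should show the
# S3 server address, bucket, and insecure flag from values.yml
kubectl exec -n gitlab-runner "$RUNNER_POD" -- \
  cat /home/gitlab-runner/.gitlab-runner/config.toml
```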

Pipeline test : A simple GitLab CI pipeline with build and test jobs demonstrates the cache. The build job creates a target/ directory, which is uploaded to MinIO. The subsequent test job restores the cache from MinIO, confirming that the S3 cache works correctly.

cache:
  paths:
    - target/

build:
  stage: build
  script:
    - mvn clean package
    - ls

test:
  stage: test
  script:
    - ls
    - ls target/

FAQ: Connection‑timeout errors usually mean the Runner is trying to reach an HTTP‑only MinIO endpoint over HTTPS; setting s3CacheInsecure to true (so CACHE_S3_INSECURE is set and plain HTTP is used) resolves the issue.

Finally, a free cloud‑native public course is available, covering GitLab CI pipeline optimization, Kubernetes Runner installation, and Java project pipelines.

Kubernetes · MinIO · object storage · Helm · GitLab Runner · S3 Cache
Written by DevOps Cloud Academy

Exploring industry DevOps practices and technical expertise.
