
How KubeVela Enables Seamless GitOps for Multi‑Cloud Application Delivery

This guide explains how KubeVela, a declarative application delivery control plane, leverages GitOps with FluxCD to provide an end-to-end workflow: infrastructure and application deployment, multi-cloud delivery strategies, and automated CI/CD pipelines for both platform operators and developers.


KubeVela as a Declarative Application Delivery Control Plane

KubeVela can be used in a GitOps manner, offering additional benefits and end‑to‑end experiences such as:

Application delivery workflow (CD pipeline): supports procedural delivery in GitOps, not just declaring the final state.

Handles dependencies and topology during deployment.

Provides a unified abstraction over existing GitOps tools, simplifying delivery and management.

Unified declaration, deployment, and binding of cloud services.

Out‑of‑the‑box delivery strategies (canary, blue‑green, etc.).

Out‑of‑the‑box hybrid/multi‑cloud deployment strategies (placement rules, cluster filters).

Kustomize‑style patches for multi‑environment delivery without learning Kustomize details.
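As a taste of the placement rules mentioned above, a KubeVela Application can declare a topology policy to fan out components across managed clusters. The sketch below is illustrative; the cluster names are hypothetical and must match clusters registered with the control plane:

<code>apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: placement-demo
spec:
  components:
  - name: demo
    type: webservice
    properties:
      image: nginx
  policies:
  - name: deploy-to-clusters
    type: topology
    properties:
      # hypothetical cluster names registered with the control plane
      clusters: ["cluster-beijing", "cluster-shanghai"]</code>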

KubeVela's GitOps mode relies on the FluxCD addon, which must be enabled first:

<code>vela addon enable fluxcd</code>

The GitOps workflow consists of CI and CD parts:

CI: builds code, creates images, and pushes them to a registry. Any CI tool (GitHub Actions, Travis, Jenkins, Tekton, etc.) can be integrated.

CD: automatically updates cluster configurations with the latest images. Two CD approaches exist:

Push-based: CI pipelines push changes to the cluster using shared credentials.

Pull-based: the cluster pulls changes from the repository, avoiding credential exposure (e.g., Argo CD, Flux CD).

There are two target audiences:

Platform administrators/operations staff who manage infrastructure by updating configuration files in a Git repo.

End developers who trigger automatic updates of applications after code changes.

Delivery for Platform Administrators / Operations

Administrators only need a Git configuration repository and KubeVela configuration files. Updating the repo automatically propagates changes to the cluster.

KubeVela recommends the following repository structure:

clusters/: contains the KubeVela GitOps configs, which must be applied manually to the cluster once.

apps/: holds application configurations (e.g., a MySQL-backed web service).

infrastructure/: holds infrastructure configs, such as the MySQL database.

<code>├── apps
│   └── my-app.yaml
├── clusters
│   ├── apps.yaml
│   └── infra.yaml
└── infrastructure
    └── mysql.yaml</code>

clusters/ directory

Example clusters/infra.yaml:

<code>apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: infra
spec:
  components:
  - name: database-config
    type: kustomize
    properties:
      repoType: git
      # replace with your git repo URL
      url: https://github.com/cnych/KubeVela-GitOps-Infra-Demo
      # if private, set git secret
      # secretRef: git-secret
      pullInterval: 10m
      git:
        branch: main
        path: ./infrastructure</code>

The apps.yaml file is similar, but watches ./apps instead of ./infrastructure.
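Based on that description, clusters/apps.yaml would look roughly like the following sketch, which mirrors infra.yaml with only the watched path changed (the imageRepository fields added later in this guide are omitted here):

<code>apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: apps
spec:
  components:
  - name: apps
    type: kustomize
    properties:
      repoType: git
      # replace with your git repo URL
      url: https://github.com/cnych/KubeVela-GitOps-Infra-Demo
      # if private, set git secret
      # secretRef: git-secret
      pullInterval: 10m
      git:
        branch: main
        path: ./apps</code>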

After applying the clusters/ files, KubeVela automatically watches apps/ and infrastructure/ for changes and syncs them to the cluster.

apps/ directory

The apps/ directory contains a simple web service that connects to MySQL and exposes a version endpoint and a /db endpoint:

<code>apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app
  namespace: default
spec:
  components:
  - name: my-server
    type: webservice
    properties:
      image: cnych/kubevela-gitops-demo:main-76a34322-1697703461
      port: 8088
      env:
      - name: DB_HOST
        value: mysql-cluster-mysql.default.svc.cluster.local:3306
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysql-secret
            key: ROOT_PASSWORD
    traits:
    - type: scaler
      properties:
        replicas: 1
    - type: gateway
      properties:
        class: nginx
        classInSpec: true
        domain: vela-gitops-demo.k8s.local
        http:
          /: 8088
        pathType: ImplementationSpecific</code>

This uses the built-in webservice component type and the gateway trait to automatically create a Deployment, a Service, and an Ingress.
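For intuition, the gateway trait above should render an Ingress roughly like the following. This is a hand-written approximation, not the exact controller output:

<code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-server
  namespace: default
spec:
  ingressClassName: nginx   # classInSpec: true puts the class in spec
  rules:
  - host: vela-gitops-demo.k8s.local
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: my-server
            port:
              number: 8088</code>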

infrastructure/ directory

This directory deploys a MySQL cluster via the MySQL operator:

<code>apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: mysql
  namespace: default
spec:
  components:
  - name: mysql-secret
    type: k8s-objects
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: mysql-secret
        type: Opaque
        stringData:
          ROOT_PASSWORD: root321
  - name: mysql-operator
    type: helm
    properties:
      repoType: helm
      url: https://helm-charts.bitpoke.io
      chart: mysql-operator
      version: 0.6.3
  - name: mysql-cluster
    type: raw
    dependsOn:
    - mysql-operator
    - mysql-secret
    properties:
      apiVersion: mysql.presslabs.org/v1alpha1
      kind: MysqlCluster
      metadata:
        name: mysql-cluster
      spec:
        replicas: 1
        secretName: mysql-secret</code>

Deploying clusters/infra.yaml creates the MySQL infrastructure:

<code>$ kubectl apply -f clusters/infra.yaml
$ vela ls
APP    COMPONENT        TYPE       TRAITS  PHASE    HEALTHY  STATUS  CREATED-TIME
infra  database-config  kustomize          running  healthy          2023-10-19 …
mysql  mysql-operator   helm               running  healthy          …
       mysql-cluster    raw                running  healthy          …</code>

Pods for MySQL and the operator become ready.

Delivery for End Developers

Developers maintain a separate code repository (e.g., https://github.com/cnych/KubeVela-GitOps-App-Demo) containing source code and a Dockerfile. After code changes, a CI pipeline builds and pushes a new image.

Example Go server code (simplified):

<code>http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    _, _ = fmt.Fprintf(w, "Version: %s\n", VERSION)
})
http.HandleFunc("/db", func(w http.ResponseWriter, r *http.Request) {
    rows, err := db.Query("select * from userinfo;")
    if err != nil {
        _, _ = fmt.Fprintf(w, "Error: %v\n", err)
        return // bail out: rows is nil here and must not be iterated
    }
    defer rows.Close()
    for rows.Next() {
        var username, desc string
        if err := rows.Scan(&username, &desc); err != nil {
            _, _ = fmt.Fprintf(w, "Scan Error: %v\n", err)
            continue
        }
        _, _ = fmt.Fprintf(w, "User: %s \nDescription: %s\n\n", username, desc)
    }
})
if err := http.ListenAndServe(":8088", nil); err != nil {
    panic(err.Error())
}</code>

Jenkins is used as the CI system. After creating a webhook pointing to a Jenkins trigger, a pipeline named KubeVela-GitOps-App-Demo is defined, with "GitHub hook trigger for GITScm polling" enabled.

Pipeline script (truncated for brevity):

<code>void setBuildStatus(String message, String state) {
    step([
        $class: "GitHubCommitStatusSetter",
        reposSource: [$class: "ManuallyEnteredRepositorySource", url: "https://github.com/cnych/KubeVela-GitOps-App-Demo"],
        contextSource: [$class: "ManuallyEnteredCommitContextSource", context: "ci/jenkins/deploy-status"],
        errorHandlers: [[ $class: "ChangingBuildStatusErrorHandler", result: "UNSTABLE"]],
        statusResultSource: [$class: "ConditionalStatusResultSource", results: [[ $class: "AnyBuildResult", message: message, state: state]]]
    ])
}
pipeline {
    agent {
        kubernetes {
            cloud 'Kubernetes'
            defaultContainer 'jnlp'
            yaml '''
        spec:
          serviceAccountName: jenkins
          containers:
          - name: golang
            image: golang:1.16-alpine3.15
            command: [cat]
            tty: true
          - name: docker
            image: docker:latest
            command: [cat]
            tty: true
            env:
            - name: DOCKER_HOST
              value: tcp://docker-dind:2375
'''
        }
    }
    stages {
        stage('Prepare') {
            steps { script { /* checkout, set env vars, set pending status */ } }
        }
        stage('Test') { steps { container('golang') { sh 'go test *.go' } } }
        stage('Build') {
            steps { withCredentials([[ $class: "UsernamePasswordMultiBinding", credentialsId: "docker-auth", usernameVariable: "DOCKER_USER", passwordVariable: "DOCKER_PASSWORD" ]]) {
                container('docker') { sh """
                    docker login -u ${DOCKER_USER} -p ${DOCKER_PASSWORD}
                    docker build -t cnych/kubevela-gitops-demo:${env.BUILD_ID} .
                    docker push cnych/kubevela-gitops-demo:${env.BUILD_ID}
                """ }
            } }
        }
    }
    post { success { setBuildStatus("Deploy success", "SUCCESS") } failure { setBuildStatus("Deploy failed", "FAILURE") } }
}
</code>

After the image is pushed, KubeVela detects the new tag and updates the GitOps repo. A Git secret containing credentials is required for KubeVela to commit back to the repo:

<code>apiVersion: v1
kind: Secret
metadata:
  name: git-secret
type: kubernetes.io/basic-auth
stringData:
  username: <your username>
  password: <your password></code>
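Equivalently, the same secret can be created imperatively with kubectl (substitute your own credentials; for GitHub, a personal access token typically serves as the password):

```shell
kubectl create secret generic git-secret \
  --type=kubernetes.io/basic-auth \
  --from-literal=username=<your username> \
  --from-literal=password=<your password>
```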

The clusters/apps.yaml file is modified to watch both the apps/ directory and the image repository, using an image policy to select the newest tag:

<code># ... omitted other fields
imageRepository:
  # image address
  image: <your image>
  secretRef: dockerhub-secret
  filterTags:
    pattern: "^main-[a-f0-9]+-(?P<ts>[0-9]+)"
    extract: "$ts"
  policy:
    numerical:
      order: asc
  commitMessage: "Image: {{range .Updated.Images}}{{println .}}{{end}}"</code>
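To make the filterTags and policy semantics concrete, the following Go sketch reproduces the selection logic locally: filter tags with the same regular expression, extract the ts capture group, and pick the highest timestamp (numerical order asc selects the last, i.e. newest, value). The function names are illustrative; only the regex and tag shapes come from the config above:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
)

// tagPattern mirrors the filterTags rule: keep only tags shaped like
// "main-<sha>-<unix-timestamp>" and capture the timestamp as "ts".
var tagPattern = regexp.MustCompile(`^main-[a-f0-9]+-(?P<ts>[0-9]+)`)

// newestTag applies the filter plus the numerical policy: sort the
// extracted timestamps in ascending order and take the last (highest).
func newestTag(tags []string) string {
	type candidate struct {
		tag string
		ts  int64
	}
	var matched []candidate
	for _, t := range tags {
		m := tagPattern.FindStringSubmatch(t)
		if m == nil {
			continue // tags like "latest" are filtered out entirely
		}
		ts, err := strconv.ParseInt(m[tagPattern.SubexpIndex("ts")], 10, 64)
		if err != nil {
			continue
		}
		matched = append(matched, candidate{t, ts})
	}
	if len(matched) == 0 {
		return ""
	}
	sort.Slice(matched, func(i, j int) bool { return matched[i].ts < matched[j].ts })
	return matched[len(matched)-1].tag
}

func main() {
	fmt.Println(newestTag([]string{
		"main-76a34322-1697703461", // older build
		"main-9e8d2465-1697703645", // newer build, wins
		"latest",                   // filtered out by the pattern
	}))
}
```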

In apps/my-app.yaml, the image field is annotated so that KubeVela can replace it automatically:

<code>spec:
  components:
  - name: my-server
    type: webservice
    properties:
      image: cnych/kubevela-gitops-demo:main-9e8d2465-1697703645 # {"$imagepolicy": "default:apps"}</code>

Applying the updated clusters/apps.yaml triggers the deployment:

<code>$ kubectl apply -f clusters/apps.yaml
$ vela ls
APP     COMPONENT  TYPE        TRAITS          PHASE            HEALTHY    STATUS     CREATED-TIME
apps    apps       kustomize                   running          healthy               …
my-app  my-server  webservice  scaler,gateway  runningWorkflow  unhealthy  Ready:0/1  …
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
my-server-…              1/1     Running   0          2m</code>

Verification via Ingress shows the updated version and database content:

<code>$ curl -H "Host: vela-gitops-demo.k8s.local" http://<ingress-ip>
Version: 0.1.8
$ curl -H "Host: vela-gitops-demo.k8s.local" http://<ingress-ip>/db
User: KubeVela
Description: It's a test user</code>

After modifying the source code (e.g., changing VERSION to 0.2.0 and updating database rows) and pushing the change, the CI pipeline builds a new image; KubeVela detects the new tag, updates the GitOps repo, and the application is automatically upgraded to the new version.

Thus, platform operators can update infrastructure or application configs by editing the Git repo, while developers trigger automatic deployments by pushing code changes, achieving a fully automated GitOps workflow.

Conclusion

For operations, updating infrastructure or application settings only requires modifying files in the configuration repository; KubeVela syncs them to the cluster, simplifying deployment. For developers, code changes automatically update the image in the configuration repo, enabling seamless version upgrades. Although KubeVela relies on FluxCD, it extends it to manage hybrid multi-cloud resources (e.g., RDS) through unified Application objects, providing a single-pane-of-glass experience. The vela top command offers an overview and health diagnostics for all managed applications.

Reference: https://kubevela.io/zh/docs/end-user/gitops/fluxcd/
Tags: CI/CD, Kubernetes, multi-cloud, GitOps, KubeVela, Infrastructure as Code
Written by Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
