Argo Workflows vs Jenkins: Building Cloud‑Native CI/CD Pipelines on ACK One Serverless
Argo Workflows, a cloud‑native Kubernetes job orchestrator, offers better autoscaling, higher concurrency, and lower cost than Jenkins, along with seamless integration with the rest of the Argo ecosystem. This article walks through deploying a Go‑based CI pipeline on ACK One Serverless Argo, using BuildKit for image building, NAS for Go module caching, and parameterized workflow templates.
Argo Workflows is an open‑source, cloud‑native workflow engine for orchestrating jobs on Kubernetes, enabling automation of complex pipelines such as scheduled tasks, machine learning, ETL, model training, data streams, and CI/CD.
Plain Kubernetes Jobs lack step dependencies, reusable templates, a visual UI, and workflow‑level error handling, which makes them a poor fit for batch processing, scientific computing, and continuous integration scenarios.
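To make the contrast concrete, here is a minimal, hypothetical Workflow showing the key thing a bare Kubernetes Job cannot express: an explicit dependency between steps. All names (`hello-dag-`, `echo`, the messages) are illustrative placeholders.

```yaml
# Minimal hypothetical Workflow: task b runs only after task a succeeds,
# a dependency that a plain Kubernetes Job cannot declare.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-dag-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: a
            template: echo
            arguments:
              parameters: [{name: msg, value: "step A"}]
          - name: b
            depends: a          # step dependency, resolved by the DAG engine
            template: echo
            arguments:
              parameters: [{name: msg, value: "step B"}]
    - name: echo
      inputs:
        parameters:
          - name: msg
      container:
        image: alpine:latest
        command: [echo, "{{inputs.parameters.msg}}"]
```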
As a CNCF graduated project, Argo Workflows is widely used, especially for continuous integration (CI).
Jenkins, the most common CI/CD solution, is free and plugin‑rich but suffers from being non‑Kubernetes‑native, performance bottlenecks as pipelines grow, limited auto‑scaling, high idle‑resource costs, and maintenance challenges due to plugin version incompatibilities and security issues.
Compared with Jenkins, Argo Workflows provides several advantages: it is Kubernetes‑native and inherits Kubernetes' container management benefits (automatic recovery, elastic scaling, RBAC with SSO); it offers autoscaling and high concurrency for large‑scale pipelines; it minimizes cost through spot ECI usage; and it integrates seamlessly with the Argo ecosystem (Argo CD, Argo Rollouts, Argo Events).
Comparison Table

| Argo Workflows | Jenkins |
| --- | --- |
| Kubernetes‑native; inherits container fault recovery, elastic scaling, and RBAC with SSO. | Not Kubernetes‑native. |
| Autoscaling and high concurrency for large‑scale pipelines. | Performance degrades as pipelines multiply; poor auto‑scaling. |
| Cost‑effective: auto‑scaling plus spot ECI support. | Idle compute resources are wasted. |
| Growing community; tight integration with Argo CD, Argo Rollouts, and Argo Events. | Mature community and abundant plugins, but high maintenance overhead. |
The article then introduces ACK One Serverless Argo, a fully managed Argo Workflows service that leverages Alibaba Cloud ECI for automatic scaling, spot instances, and cost reduction.
CI Pipeline Overview
The pipeline uses BuildKit for image building and layer caching, NAS for Go module caching, and consists of three main steps: git clone & checkout; an optional Go test (accelerated by the NAS module cache); and build & push of the image, optionally tagged with the commit ID.
ACK One Serverless Argo ships with a pre‑installed ClusterWorkflowTemplate named ci-go-v1. Its YAML definition is shown below:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ClusterWorkflowTemplate
metadata:
  name: ci-go-v1
spec:
  entrypoint: main
  volumes:
    - name: run-test
      emptyDir: {}
    - name: workdir
      persistentVolumeClaim:
        claimName: pvc-nas
    - name: docker-config
      secret:
        secretName: docker-config
  arguments:
    parameters:
      - name: repo_url
        value: ""
      - name: repo_name
        value: ""
      - name: target_branch
        value: "main"
      - name: container_image
        value: ""
      - name: container_tag
        value: "v1.0.0"
      - name: dockerfile
        value: "./Dockerfile"
      - name: enable_suffix_commitid
        value: "true"
      - name: enable_test
        value: "true"
  templates:
    - name: main
      dag:
        tasks:
          - name: git-checkout-pr
            inline:
              container:
                image: alpine:latest
                command:
                  - sh
                  - -c
                  - |
                    set -eu
                    apk --update add git
                    cd /workdir
                    echo "Start to Clone "{{workflow.parameters.repo_url}}
                    git -C "{{workflow.parameters.repo_name}}" pull || git clone {{workflow.parameters.repo_url}}
                    cd {{workflow.parameters.repo_name}}
                    echo "Start to Checkout target branch" {{workflow.parameters.target_branch}}
                    git checkout {{workflow.parameters.target_branch}}
                    echo "Get commit id"
                    git rev-parse --short origin/{{workflow.parameters.target_branch}} > /workdir/{{workflow.parameters.repo_name}}-commitid.txt
                    commitId=$(cat /workdir/{{workflow.parameters.repo_name}}-commitid.txt)
                    echo "Commit id is got: "$commitId
                    echo "Git Clone and Checkout Complete."
                volumeMounts:
                  - name: workdir
                    mountPath: /workdir
                resources:
                  requests:
                    memory: 1Gi
                    cpu: 1
              activeDeadlineSeconds: 1200
          - name: run-test
            when: "{{workflow.parameters.enable_test}} == true"
            inline:
              container:
                image: golang:1.22-alpine
                command:
                  - sh
                  - -c
                  - |
                    set -eu
                    if [ ! -d "/workdir/pkg/mod" ]; then
                      mkdir -p /workdir/pkg/mod
                      echo "GOMODCACHE Directory /pkg/mod is created"
                    fi
                    export GOMODCACHE=/workdir/pkg/mod
                    cp -R /workdir/{{workflow.parameters.repo_name}} /test/{{workflow.parameters.repo_name}}
                    echo "Start Go Test..."
                    cd /test/{{workflow.parameters.repo_name}}
                    go test -v ./...
                    echo "Go Test Complete."
                volumeMounts:
                  - name: workdir
                    mountPath: /workdir
                  - name: run-test
                    mountPath: /test
                resources:
                  requests:
                    memory: 4Gi
                    cpu: 2
              activeDeadlineSeconds: 1200
            depends: git-checkout-pr
          - name: build-push-image
            inline:
              container:
                image: moby/buildkit:v0.13.0-rootless
                command:
                  - sh
                  - -c
                  - |
                    set -eu
                    tag={{workflow.parameters.container_tag}}
                    if [ {{workflow.parameters.enable_suffix_commitid}} == "true" ]
                    then
                      commitId=$(cat /workdir/{{workflow.parameters.repo_name}}-commitid.txt)
                      tag={{workflow.parameters.container_tag}}-$commitId
                    fi
                    echo "Image Tag is: "$tag
                    echo "Start to Build And Push Container Image"
                    cd /workdir/{{workflow.parameters.repo_name}}
                    buildctl-daemonless.sh build \
                      --frontend dockerfile.v0 \
                      --local context=. \
                      --local dockerfile=. \
                      --opt filename={{workflow.parameters.dockerfile}} \
                      --opt build-arg:GOPROXY=http://goproxy.cn,direct \
                      --output type=image,"name={{workflow.parameters.container_image}}:${tag},{{workflow.parameters.container_image}}:latest",push=true,registry.insecure=true \
                      --export-cache mode=max,type=registry,ref={{workflow.parameters.container_image}}:buildcache \
                      --import-cache type=registry,ref={{workflow.parameters.container_image}}:buildcache
                    echo "Build And Push Container Image {{workflow.parameters.container_image}}:${tag} and {{workflow.parameters.container_image}}:latest Complete."
                env:
                  - name: BUILDKITD_FLAGS
                    value: --oci-worker-no-process-sandbox
                  - name: DOCKER_CONFIG
                    value: /.docker
                volumeMounts:
                  - name: workdir
                    mountPath: /workdir
                  - name: docker-config
                    mountPath: /.docker
                securityContext:
                  seccompProfile:
                    type: Unconfined
                  runAsUser: 1000
                  runAsGroup: 1000
                resources:
                  requests:
                    memory: 4Gi
                    cpu: 2
              activeDeadlineSeconds: 1200
            depends: run-test
```

The article then outlines the steps to run the pipeline via the ACK One console: log in, enable the Argo workflow console, navigate to Cluster Workflow Templates, select ci-go-v1, fill in the parameters, and submit.
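Note that the template mounts two pre‑existing resources: a NAS‑backed PVC named pvc-nas (the shared workdir and Go module cache) and a Secret named docker-config holding registry credentials. Only the names come from the template; everything else below is a hedged sketch with placeholders for your environment (storage class, registry host, credentials).

```yaml
# Hypothetical sketch of the resources the template assumes exist.
# Names match the template; all other values are placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nas
spec:
  accessModes: ["ReadWriteMany"]            # shared across pipeline steps
  storageClassName: alibabacloud-cnfs-nas   # placeholder NAS storage class
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: Secret
metadata:
  name: docker-config
type: Opaque
stringData:
  config.json: |
    {"auths": {"registry.example.com": {"auth": "<base64 of user:password>"}}}
```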
A parameter table lists the required inputs: repo_url, repo_name, target_branch, container_image, container_tag, dockerfile, enable_suffix_commitid, and enable_test.
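Besides the console, the same parameters can be supplied declaratively by creating a Workflow that references the cluster template. The repository URL, repository name, and image name below are placeholders for illustration:

```yaml
# Hypothetical Workflow that runs the ci-go-v1 template with explicit
# parameter values; repo and image names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: ci-go-
spec:
  workflowTemplateRef:
    name: ci-go-v1
    clusterScope: true        # ci-go-v1 is a ClusterWorkflowTemplate
  arguments:
    parameters:
      - name: repo_url
        value: "https://github.com/example/hello-go.git"   # placeholder
      - name: repo_name
        value: "hello-go"                                  # placeholder
      - name: target_branch
        value: "main"
      - name: container_image
        value: "registry.example.com/demo/hello-go"        # placeholder
      - name: container_tag
        value: "v1.0.0"
      - name: dockerfile
        value: "./Dockerfile"
      - name: enable_suffix_commitid
        value: "true"
      - name: enable_test
        value: "true"
```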
After execution, the workflow status can be inspected in the Argo UI.
Conclusion
ACK One Serverless Argo, as a fully managed service, enables large‑scale, fast, and cost‑effective CI pipelines, and can be combined with ACK One GitOps (Argo CD) and Argo Events for a complete automated CI/CD solution.