Build an Enterprise CI/CD Platform with Jenkins, Docker, and Kubernetes
This tutorial uses the Ruoyi project to build a full CI/CD pipeline with Jenkins, custom Docker base images, Kubernetes deployments, and automated notifications. It covers environment preparation, pipeline design, Dockerfile authoring, Jenkins configuration, GitLab webhooks, and deployment verification across DEV, UAT, and PROD stages.
Case Introduction
This case uses the Ruoyi project as an example to build an enterprise‑level CI/CD platform with Jenkins.
Ruoyi service list:
ruoyi-auth
ruoyi-system
ruoyi-gateway
ruoyi-ui
Ruoyi environment list:
DEV
UAT
PROD
Environment preparation:
nacos installed and configured
MySQL deployed and initialized
Redis deployed
Harbor image repository
Gitlab deployed
Kubernetes deployed
Ingress deployed
Design Idea
Trigger design: the Jenkins Generic Webhook Trigger plugin automatically starts the pipeline when a GitLab webhook fires.
Process description:
After developers finish a change, they merge it and create a Git tag.
Gitlab webhook triggers the Jenkins pipeline.
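The release flow above can be sketched as a small shell helper (the branch name and version are illustrative; merging itself happens in GitLab, and pushing the tag is what fires the "Tag push events" webhook):

```shell
# Hypothetical release helper: tag the merged commit and push the tag
# so the GitLab webhook starts the Jenkins pipeline.
cut_release() {
    tag="$1"
    git checkout master
    git pull origin master
    git tag -a "$tag" -m "release $tag"
    git push origin "$tag"
}
# Example: cut_release v3.6.4
```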
Pipeline design diagrams:
Custom Base Images
In real enterprise environments, teams define custom base images tailored to their own build and runtime needs.
Define the Maven image:
Used for building and packaging code; it pre-bakes the RuoYi-Cloud dependencies into the image so that layered builds do not fail or stall while downloading dependencies.
# Pull the source code and switch to the release branch
$ git clone https://gitee.com/y_project/RuoYi-Cloud.git
$ cd RuoYi-Cloud
$ git checkout v3.6.3
$ cd ..
# Define the Dockerfile
$ cat Dockerfile
FROM maven:3.8.6-openjdk-8
ADD RuoYi-Cloud /opt/RuoYi-Cloud
RUN cd /opt/RuoYi-Cloud && mvn clean install -DskipTests
RUN rm -rf /opt/RuoYi-Cloud
# Build and push the image
$ docker build -t uhub.service.ucloud.cn/kubesre/maven:jdk8 .
$ docker push uhub.service.ucloud.cn/kubesre/maven:jdk8
Define Java base image:
Creates a flexible base image whose configuration is driven by environment variables.
# Create a working directory
mkdir base && cd base
# Create the startup script
cat docker-entrypoint.sh
#!/bin/sh
java -server -Xms$JVM_XMS -Xmx$JVM_XMX \
  -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/data/gc.log \
  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/heapdump.hprof \
  -jar app.jar \
  --server.port=$SERVICE_PORT \
  --spring.profiles.active=$PROFILES_ACTIVE \
  --spring.cloud.nacos.config.server-addr=$NACOS_ADDRESS \
  --spring.cloud.nacos.config.namespace=$NACOS_NAMESPACE_ID \
  --spring.cloud.nacos.config.username=$NACOS_USERNAME \
  --spring.cloud.nacos.config.password=$NACOS_PASSWORD \
  --spring.cloud.nacos.discovery.server-addr=$NACOS_ADDRESS \
  --spring.cloud.nacos.discovery.namespace=$NACOS_NAMESPACE_ID \
  --spring.cloud.nacos.discovery.username=$NACOS_USERNAME \
  --spring.cloud.nacos.discovery.password=$NACOS_PASSWORD
# Create the down-nacos script (deregisters the instance from Nacos before shutdown)
cat down-nacos.sh
#!/bin/sh
ipTrue=false
java_service_ip=""
code=""
getPodIp() {
    java_service_ip=`ip a | grep inet | grep -v inet6 | grep -v '127.0.0.1' | awk '{print $2}' | awk -F / '{print $1}'`
    grep -w "${java_service_ip}" /etc/hosts > /dev/null
    if [ $? -eq 0 ]; then
        echo "get java service ip success"
        ipTrue=true
    else
        echo "get java service ip failed"
    fi
}
downService(){
    accessToken=`curl -s -X POST http://$NACOS_ADDRESS/nacos/v1/auth/users/login --form username=$NACOS_USERNAME --form password=$NACOS_PASSWORD | jq -r .accessToken`
    # The Nacos v1 instance API returns the literal string "ok" on success
    code=`curl -s -X PUT "http://$NACOS_ADDRESS/nacos/v1/ns/instance?language=zh-CN&accessToken=$accessToken&username=$NACOS_USERNAME&serviceName=$JAVA_SERVICE_NAME&ip=$java_service_ip&port=$SERVICE_PORT&enabled=false&namespaceId=$NACOS_NAMESPACE_ID"`
    if [ "$code" = "ok" ]; then
        echo "java service down from nacos success"
    else
        echo "java service down from nacos failed"
    fi
}
start(){
    getPodIp
    if $ipTrue; then
        downService
        sleep 30
    else
        echo "down $JAVA_SERVICE_NAME failed" >> down_service.log
    fi
}
start
Variable explanations:
JVM_XMS: minimum JVM heap memory
JVM_XMX: maximum JVM heap memory
SERVICE_PORT: application service port
NACOS_ADDRESS: Nacos address
NACOS_USERNAME: Nacos username
NACOS_PASSWORD: Nacos password
NACOS_NAMESPACE_ID: Nacos namespace ID
PROFILES_ACTIVE: environment name
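The article references `uhub.service.ucloud.cn/kubesre/java-base:v8` in the layered builds below but does not show its Dockerfile. A minimal sketch, assuming the two scripts above sit in the `base` directory, that `openjdk:8-jre` is an acceptable runtime base, and that `curl`, `jq` (needed by down-nacos.sh), and `iproute2` (for `ip a`) must be installed:

```dockerfile
# Hypothetical Dockerfile for the Java base image (java-base:v8)
FROM openjdk:8-jre
# curl and jq are used by down-nacos.sh; iproute2 provides the ip command
RUN apt-get update && apt-get install -y --no-install-recommends curl jq iproute2 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /data
COPY docker-entrypoint.sh down-nacos.sh /data/
RUN chmod +x /data/docker-entrypoint.sh /data/down-nacos.sh
ENTRYPOINT ["/data/docker-entrypoint.sh"]
```

At runtime the container is configured entirely through the environment variables listed above (for example `docker run -e JVM_XMS=512m -e JVM_XMX=512m ...`); in Kubernetes, down-nacos.sh would typically be wired into a preStop hook so the instance is deregistered before the pod terminates.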
Dockerfile Writing
Benefits of layered (multi-stage) builds: the build does not depend on the local environment, and the final image is smaller because build tooling is discarded.
Java Dockerfile (layered build):
FROM uhub.service.ucloud.cn/kubesre/maven:jdk8 AS build
COPY src /opt/src/
COPY pom.xml /opt/
RUN cd /opt/ && mvn clean install -DskipTests
FROM uhub.service.ucloud.cn/kubesre/java-base:v8
# copy jar file
COPY --from=build /opt/target/*.jar /data/app.jar
Vue Dockerfile (layered build):
FROM node:16 AS builder
WORKDIR /usr/src/app
COPY . .
RUN npm install --registry=https://registry.npmmirror.com
RUN npm run build:prod
FROM nginx:stable-alpine
WORKDIR /home/ruoyi/projects/ruoyi-ui
COPY --from=builder /usr/src/app/dist /home/ruoyi/projects/ruoyi-ui
COPY ./nginx/conf/nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
Pipeline Writing
All pipeline files must be customized before use (credentialsId, notification robot ID, and image repository address); the credentialsId and robot IDs can be looked up in the Jenkins UI.
Java Pipeline (excerpt):
pipeline {
agent any
triggers {
GenericTrigger(
genericVariables: [
[key: 'ref', value: '$.ref'],
[key: 'user_username', value: '$.user_username'],
[key: 'GitRepository', value: '$.project.git_http_url'],
[key: 'project', value: '$.project.name'],
[key: 'repository', value: '$.repository.name']
],
token: "$JOB_NAME",
causeString: 'Triggered on $branch',
printContributedVariables: true,
printPostContent: true,
silentResponse: false
)
}
environment {
pipeline_dir="/var/lib/jenkins/workspace/pipeline"
Tag=sh(script: 'echo "${ref}" | awk -F"/" \'{print $3}\'', returnStdout: true).trim()
Project_Name="${project}"
// other variables omitted for brevity
}
options { buildDiscarder(logRotator(numToKeepStr: '10')) }
stages {
stage('Pull Code') {
steps {
checkout([$class: 'GitSCM', branches: [[name: "$ref"]], userRemoteConfigs: [[credentialsId: 'ac66550d-6999-485c-af3a-7e6189f765f0', url: "$GitRepository"]]])
script { currentBuild.displayName = "#${BUILD_NUMBER} - ${Project_Name} - ${Tag}" }
}
}
// Build Image, DeployDev, DeployUat, DeployGray, DeployProd, RollBack stages follow the same pattern
}
}
Vue Pipeline follows the same structure with its own project name and image repository.
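The omitted stages follow the same pattern as "Pull Code". As one illustration, a hypothetical "Build Image" stage might look like the sketch below (the registry path and variable names beyond `Project_Name` and `Tag` are assumptions, not the author's exact code):

```groovy
stage('Build Image') {
    steps {
        script {
            // Hypothetical registry path; Project_Name and Tag come from
            // the environment block shown above.
            def image = "harbor.example.com/ruoyi/${Project_Name}:${Tag}"
            sh """
                docker build -t ${image} .
                docker push ${image}
            """
        }
    }
}
```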
Configure Jenkins
Required plugins (install manually): Generic Webhook Trigger, Pipeline, Build User Vars, Blue Ocean, Lark Notice (uploaded as a .hpi file). The Lark Notice plugin sends build notifications to Feishu groups, supporting multiple notification moments and message types.
Configure Java Pipeline
Create a new Jenkins job, select "Pipeline", set the SCM to the repository containing the Jenkinsfile, adjust the script path, and run the job once to apply the configuration.
Configure Vue Pipeline
Same steps as the Java pipeline, but point to the Vue project’s Jenkinsfile.
Configure Gitlab Webhook
In the Gitlab project, add a webhook with the URL pointing to Jenkins, set the token to the Jenkins job name, and enable the "Tag push events" trigger. This ensures that only tag pushes start the pipeline.
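Before relying on the real hook, the trigger endpoint can be exercised by hand. A minimal sketch, assuming a hypothetical Jenkins host and job name; the payload fields mirror the genericVariables declared in the pipeline:

```shell
#!/bin/sh
# Replay a minimal tag-push payload against the Generic Webhook Trigger
# endpoint. Host and job name are placeholders; the token must equal the
# Jenkins job name, as configured in the pipeline.
JENKINS_URL="http://jenkins.example.com:8080"   # hypothetical host
JOB_NAME="ruoyi-java-pipeline"                  # hypothetical job name
TRIGGER_URL="$JENKINS_URL/generic-webhook-trigger/invoke?token=$JOB_NAME"
PAYLOAD='{"ref":"refs/tags/v3.6.3","user_username":"dev","project":{"name":"RuoYi-Cloud","git_http_url":"https://gitee.com/y_project/RuoYi-Cloud.git"},"repository":{"name":"RuoYi-Cloud"}}'
echo "$TRIGGER_URL"
# Uncomment to actually fire the trigger:
# curl -s -X POST "$TRIGGER_URL" -H 'Content-Type: application/json' -d "$PAYLOAD"
```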
Trigger Verification
Create a tag in Gitlab; the webhook triggers the corresponding Jenkins pipeline. Jenkins prompts for approval before deploying to DEV, UAT, Gray, and PROD environments. Users can also roll back to the previous version. Screenshots illustrate each approval step.
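The rollback option mentioned above could be implemented as a pipeline stage along these lines (a sketch only: the namespace, deployment naming, and kubectl access from the Jenkins agent are assumptions):

```groovy
stage('RollBack') {
    steps {
        // Pause for human approval, matching the approval flow above
        input message: "Roll back ${Project_Name} to the previous version?"
        // Revert the Deployment to its previous ReplicaSet and wait for it
        sh "kubectl rollout undo deployment/${Project_Name} -n ruoyi-prod"
        sh "kubectl rollout status deployment/${Project_Name} -n ruoyi-prod --timeout=300s"
    }
}
```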
Build Notification
An example of a successful build notification sent to Feishu is shown.
Ops Development Stories
Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.