Capo Project: Cloud‑Native Network Coordination Service – Deployment, Configuration, Testing, and CI/CD Guide
This article is a comprehensive guide to Capo, an open-source cloud-native network coordination service. It covers the project's architecture, its three deployment methods (Helm, Kustomize, and plain YAML), detailed configuration parameters, observability setup, static analysis with golangci-lint, unit and end-to-end testing on Kind, Helm chart packaging and registry publishing, and a complete GitHub Actions CI/CD workflow.
Capo is a cloud‑native network coordination service developed by the Information Management Department's architecture team, now open‑sourced on GitHub.
Deployment
Three deployment options are provided, with Helm being the recommended method.
Helm: Uses a chart package for easy install, upgrade, rollback, and dependency management.
Kustomize: Layer‑based customization built into kubectl since v1.14.
Plain YAML: Native Kubernetes manifests.
Helm
Refer to the capo-helm-chart page for configuration. The values.yaml file includes parameters such as replicaCount, leader election, health probe address, webhook port, metrics address, and IP reservation settings.
# -- Number of instances; for high availability, set this to 3
replicaCount: 3
# -- Capo configuration
config:
  # -- enable leader election
  leaderElectionEnable: true
  # -- health probe bind address
  healthProbeBindAddress: ":8081"
  # -- webhook port
  webhookPort: 9443
  # -- metrics bind address
  metricsBindAddress: ":8080"
  # -- maximum number of IPs to reserve
  ipReserveMaxCount: 300
  # -- maximum time an IP stays reserved
  ipReserveTime: 40m
  # -- period between IP release checks
  ipReleasePeriod: 5s
Key configuration notes:
replicaCount defaults to 3 for high availability.
config.leaderElectionEnable defaults to true and requires replicaCount > 1.
config.ipReserveMaxCount should be set to roughly 1.2 * maxPods so that pod IPs can still be reserved after a node failure.
Installation
Add the Helm repository and install the chart:
# helm repo add xdfgithubrepo https://xdfdotcn.github.io/capo
# helm search repo xdfgithubrepo
# helm install capo -n ip-reserve xdfgithubrepo/capo --create-namespace
Check the deployment status:
# kubectl get pods -n ip-reserve -o wide | grep capo
capo-78b6899d4d-dxwfh   2/2   Running   0   10s   10.12.1.2   master01
Kustomize
Deploy from the config/default directory:
# cd config/default
# kustomize build | kubectl apply -f -
Uninstall with:
# kustomize build | kubectl delete -f -
Plain YAML
Deploy using the manifests under deploy/yaml:
# cd deploy/yaml
# kubectl apply -f install.yaml
Uninstall with:
# kubectl delete -f install.yaml
Observability
The deployment creates a ServiceMonitor for Prometheus and provides a Grafana dashboard to visualize IP reservation and release metrics.
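The article does not reproduce the ServiceMonitor manifest. The following is a hypothetical sketch of what the chart might create; the name, labels, and port name are assumptions, and only the namespace and the :8080 metrics endpoint come from the configuration shown earlier:

```yaml
# Hypothetical ServiceMonitor sketch -- name, labels, and port name are
# assumptions; only the namespace and the :8080 metrics endpoint come
# from the configuration shown above.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: capo
  namespace: ip-reserve
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: capo
  endpoints:
    - port: metrics      # Service port exposing metricsBindAddress (":8080")
      path: /metrics
      interval: 30s
```

Prometheus Operator must be installed in the cluster for the ServiceMonitor resource to take effect.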
Static Code Analysis
Go static analysis is performed with golangci-lint, which aggregates many linters, such as asasalint, asciicheck, bidichk, bodyclose, errcheck, gofmt, goimports, govet, misspell, and others.
# https://golangci-lint.run/usage/linters/
linters:
  enable:
    - asasalint
    - asciicheck
    - bidichk
    - bodyclose
    - errcheck
    - exportloopref
    - gofmt
    - goimports
    - gosimple
    - govet
    - ineffassign
    - misspell
    - noctx
    - nosprintfhostport
    - unconvert
    - unused
    - wastedassign
    - whitespace
run:
  timeout: 10m
  skip-files:
    - ".+_test.go"
    - ".+_test_.+.go"
Run the linter:
# golangci-lint run ./... -v
Testing and Coverage
Unit tests are run with:
# make test
# go test -v ./... -coverpkg="./..." -covermode=atomic -coverprofile=coverage.out
View coverage in HTML:
# go tool cover -html=./coverage.out
Coverage reports are uploaded to Codecov via GitHub Actions.
End‑to‑End Tests
E2E tests use the e2e-framework against a Kind cluster. Example Kind configuration:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  serviceSubnet: "11.0.0.0/16"
  podSubnet: "11.244.0.0/16"
  kubeProxyMode: "ipvs"
containerdConfigPatches:
  - |-
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["http://hub-mirror.c.163.com"]
nodes:
  - role: control-plane
    kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        imageRepository: registry.aliyuncs.com/google_containers
      - |
        kind: InitConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
  - role: worker
    kubeadmConfigPatches:
      - |
        kind: JoinConfiguration
        nodeRegistration:
          kubeletExtraArgs:
            pod-infra-container-image: registry.aliyuncs.com/google_containers/pause:3.1
Example validating webhook configuration (certificate generation omitted):
apiVersion: v1
kind: Namespace
metadata:
  name: ip-reserve
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: ip-reserve-validating-webhook-configuration
webhooks:
  - admissionReviewVersions:
      - v1
    clientConfig:
      caBundle: xxx
      service:
        name: capo-webhook-service
        namespace: ip-reserve
        path: /pod-ip-reservation
    failurePolicy: Fail
    name: pod.ip.io
    namespaceSelector:
      matchExpressions:
        - key: ip-reserve
          operator: In
          values:
            - enabled
    rules:
      - apiGroups:
          - ""
        apiVersions:
          - v1
        operations:
          - DELETE
        resources:
          - pods
      - apiGroups:
          - ""
        apiVersions:
          - v1
        operations:
          - CREATE
        resources:
          - pods/eviction
        scope: '*'
    sideEffects: None
Utility function to obtain the machine's outbound IP:
import (
	"log"
	"net"
)

// GetOutboundIP returns the preferred outbound IP of this machine.
// Dialing UDP sends no packets; it only asks the OS which local
// address would be used to reach the target address, so this works
// without actual connectivity to 8.8.8.8.
func GetOutboundIP() net.IP {
	conn, err := net.Dial("udp", "8.8.8.8:80")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	localAddr := conn.LocalAddr().(*net.UDPAddr)
	return localAddr.IP
}
Helm Chart Packaging
Package the chart with version variables and push to a registry (Harbor or GitHub Pages):
# export APP_VERSION="v1.0.1"
# export CHART_VERSION="0.1.0"
# export IMAGE="harbor-xadd.staff.xdf.cn/cloudnative/ip-reserve-delay-release:${APP_VERSION}"
# helm package capo --version ${CHART_VERSION} --app-version ${APP_VERSION}
Push to Harbor using the helm-push plugin:
# helm plugin install https://github.com/chartmuseum/helm-push
# helm repo add --username=admin --password=Xadd12345 xdfccrepo https://harbor-xadd.test.xdf.cn/chartrepo/cloudnative
# helm cm-push capo-${CHART_VERSION}.tgz xdfccrepo
Or publish via GitHub Pages on the gh-pages branch.
GitHub Actions CI/CD Workflow
The workflow runs on pushes to master / main and PRs, performing linting, unit and e2e tests on a Kind cluster, uploading coverage to Codecov, and finally building the binary.
# .github/workflows/go.yml
name: Go
on:
  push:
    branches: ["master", "main"]
  pull_request:
    branches: ["master"]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: 1.17
      - uses: actions/checkout@v3
      - run: make lint
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: 1.17
      - uses: actions/checkout@v3
      - name: Create k8s Kind Cluster
        uses: helm/kind-action@v1.2.0
        with:
          version: v0.12.0
          config: ./test/e2e/kind-config.yaml
          node_image: "kindest/node:v1.23.0"
          cluster_name: "my-cluster-b3d07"
      - run: make test
      - uses: codecov/codecov-action@v3
        with:
          file: ./coverage.out
          fail_ci_if_error: true
          verbose: true
  build:
    runs-on: ubuntu-latest
    needs: [lint, test]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-go@v3
        with:
          go-version: 1.17
      - run: make build
The CI logs show successful lint, test, and build stages, with coverage above 80% for the pkg package.
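The workflow invokes make lint, make test, and make build, but the Makefile itself is not shown in the article. A minimal hypothetical sketch follows, reusing the lint and test commands quoted earlier; the build target's output path and flags are assumptions:

```makefile
# Hypothetical Makefile sketch; the build output path and flags are
# assumptions, not the project's actual Makefile.
.PHONY: lint test build

lint:
	golangci-lint run ./... -v

test:
	go test -v ./... -coverpkg="./..." -covermode=atomic -coverprofile=coverage.out

build:
	CGO_ENABLED=0 go build -o bin/capo .
```

Keeping the CI steps as thin wrappers around make targets lets developers run the exact same checks locally before pushing.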
Development Roadmap
Support hot‑update of configuration without restart.
Add Etcd backend for Calico.
Automatic IP release on cluster deletion.
Label/annotation based IP reservation for specific workloads.
Fixed IP support for Calico and Cilium.
Handle kubelet‑driven evictions where pods are killed directly.
Open‑Source Insights
Key practices for a successful open‑source project include writing clean code, thorough testing, comprehensive documentation, proper releases, effective promotion, responsive feedback handling, and continuous iteration.
For more details, visit the project repository: https://github.com/xdfdotcn/capo.
New Oriental Technology
Practical internet development experience, tech sharing, knowledge consolidation, and forward-thinking insights.