
Cross‑Account Multi‑Cluster Traffic Unified Access and Governance on Tencent Cloud

By registering foreign clusters to the Tencent Distributed Cloud Center and linking them with Cloud Connect Network, the guide shows how to use Tencent Cloud Service Mesh to create a unified ingress gateway, configure DNS and Istio virtual services, and route traffic across multiple accounts and clusters securely.

Tencent Cloud Developer

This article, originally published in the Tencent Cloud Developer Community’s "Tech Thinking & Sharing" column, explains how to achieve unified traffic access and governance for services deployed across multiple Tencent Cloud accounts using Tencent Cloud Service Mesh (TCM), Cloud Connect Network (CCN) and the Tencent Distributed Cloud Center (TDCC).

Requirement Scenario: Services are deployed across multiple Tencent Cloud accounts. The goal is to receive all traffic in a single account while still being able to forward requests to services that reside in the other accounts.

Requirement Analysis: Multi‑cluster cross‑VPC traffic can be managed with TCM + CCN, but TCM cannot directly manage clusters that belong to other accounts. The solution is to register the foreign clusters to TDCC (currently in internal testing) and then let TCM discover those clusters via TDCC.

Key Points and Precautions:

When using managed clusters (TKE/EKS), the apiserver address is an internal IP (169.x.x.x) that cannot be reached by TCM from another account. Use independent TKE clusters for cross‑account registration.

If sidecar injection is not required, managed clusters can be used.

Step‑by‑Step Operations:

1. Prepare Clusters

In account A (the traffic‑receiving account), create one or more TKE/EKS clusters. In the other accounts, create independent TKE clusters. Ensure that the CIDR blocks of all clusters do not overlap.

2. Connect Networks with Cloud Connect Network (CCN)

In account A, create a CCN and associate the VPCs that need to be connected. In the other accounts, go to the VPC console, select the VPC to be connected, and request to join the CCN created by account A. After the other accounts submit the request, approve it in account A.

3. Open TDCC

Log in to account A, open the TDCC console and follow the wizard to enable TDCC. Choose the region, VPC and subnet for TDCC. It is recommended to place TDCC in the same region as the services, or as close as possible to reduce control‑plane latency.

After the TDCC hub cluster is created, go to the TDCC cluster list and click Register Existing Cluster. Choose Non‑TKE Cluster (otherwise only clusters in the same account can be selected) and select the region of the foreign cluster. Click Generate Registration Command and download the generated agent.yaml file.

Apply the registration yaml in the foreign cluster:

kubectl apply -f agent.yaml

After registration, the cluster status becomes Running in the TDCC console.

4. Create Service Mesh

In account A, open the TCM console and create a new mesh (prefer the highest Istio version and choose a managed mesh). During mesh creation, associate the clusters that have been registered to TDCC.

Example of creating an Istio Gateway (yaml):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster
  namespace: istio-system
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http-80
      protocol: HTTP
    hosts:
    - "*.imroc.cc"

5. Create Ingress Gateway

In the mesh’s basic information page, create an Ingress Gateway, select the CLB (or create a new one) and bind it to the mesh. The CLB IP will be used for DNS resolution.

6. Configure DNS

Resolve the three service domain names (e.g., cluster1.imroc.cc, cluster2.imroc.cc, cluster3.imroc.cc) to the Ingress Gateway’s CLB IP.
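The records themselves are plain A records pointing every service domain at the same CLB address. A sketch in zone‑file form (203.0.113.10 is a placeholder, not the real CLB IP):

```
; all three domains resolve to the Ingress Gateway CLB (placeholder IP)
cluster1.imroc.cc.  300  IN  A  203.0.113.10
cluster2.imroc.cc.  300  IN  A  203.0.113.10
cluster3.imroc.cc.  300  IN  A  203.0.113.10
```

In practice these would be added through the DNS provider’s console rather than a raw zone file; host‑based routing at the Gateway is what later tells the three domains apart.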

7. Deploy Test Services

Deploy three mock services (using stoplight/prism) in three different clusters. Example yaml for cluster1:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster1-conf
  namespace: test
data:
  mock.yaml: |
    openapi: 3.0.3
    info:
      title: MockServer
      version: 1.0.0
    paths:
      '/':
        get:
          responses:
            '200':
              content:
                'text/plain':
                  schema:
                    type: string
                    example: cluster1
---
apiVersion: v1
kind: Service
metadata:
  name: cluster1
  namespace: test
  labels:
    app: cluster1
spec:
  type: ClusterIP
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: 80
  selector:
    app: cluster1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster1
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster1
      version: v1
  template:
    metadata:
      labels:
        app: cluster1
        version: v1
    spec:
      containers:
      - name: cluster1
        image: stoplight/prism:4
        args:
        - mock
        - -h
        - 0.0.0.0
        - -p
        - "80"
        - /etc/prism/mock.yaml
        volumeMounts:
        - mountPath: /etc/prism
          name: config
      volumes:
      - name: config
        configMap:
          name: cluster1-conf

Apply the yaml in each cluster (replace cluster1.yaml with cluster2.yaml and cluster3.yaml accordingly):

kubectl create ns test
kubectl apply -f cluster1.yaml
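Since the three manifests differ only in the service name, cluster2.yaml and cluster3.yaml can be derived from the first one instead of being written by hand. A small sketch (assumes cluster1.yaml is the manifest shown above, in the current directory):

```shell
# Generate the cluster2 and cluster3 manifests by renaming every
# occurrence of "cluster1" (ConfigMap name, labels, mock response, etc.).
for i in 2 3; do
  if [ -f cluster1.yaml ]; then
    sed "s/cluster1/cluster${i}/g" cluster1.yaml > "cluster${i}.yaml"
  fi
done
```

Each generated file is then applied in its own cluster with the same `kubectl create ns test` / `kubectl apply` pair shown above.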

8. Configure VirtualService Rules

Create a VirtualService for each service and bind it to the previously created Gateway:

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cluster1-imroc-cc
  namespace: test
spec:
  gateways:
  - istio-system/cluster
  hosts:
  - 'cluster1.imroc.cc'
  http:
  - route:
    - destination:
        host: cluster1.test.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cluster2-imroc-cc
  namespace: test
spec:
  gateways:
  - istio-system/cluster
  hosts:
  - 'cluster2.imroc.cc'
  http:
  - route:
    - destination:
        host: cluster2.test.svc.cluster.local
        port:
          number: 80
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cluster3-imroc-cc
  namespace: test
spec:
  gateways:
  - istio-system/cluster
  hosts:
  - 'cluster3.imroc.cc'
  http:
  - route:
    - destination:
        host: cluster3.test.svc.cluster.local
        port:
          number: 80
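The same Gateway binding also supports the finer‑grained rules mentioned later (path‑based routing, traffic splitting). A hedged sketch combining both on one hypothetical extra domain; the services referenced are the mock services from step 7, but this VirtualService is illustrative and not part of the setup above:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canary-imroc-cc
  namespace: test
spec:
  gateways:
  - istio-system/cluster
  hosts:
  - 'canary.imroc.cc'    # hypothetical extra domain
  http:
  - match:
    - uri:
        prefix: /v2      # path-based routing: /v2 goes to cluster2
    route:
    - destination:
        host: cluster2.test.svc.cluster.local
        port:
          number: 80
  - route:               # default route: 90/10 traffic split
    - destination:
        host: cluster1.test.svc.cluster.local
        port:
          number: 80
      weight: 90
    - destination:
        host: cluster2.test.svc.cluster.local
        port:
          number: 80
      weight: 10
```

Rules are matched top to bottom, so the `/v2` match must precede the catch‑all weighted route.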

9. Test the Result

Use curl to request each domain name. The response should be cluster1, cluster2, or cluster3, respectively, confirming that traffic is correctly routed across accounts and clusters.
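That check can be wrapped in a small shell helper; a sketch (the function name check_cluster is made up, and it assumes the DNS records from step 6 are live):

```shell
# Fetch a domain and compare the response body with the expected cluster name.
check_cluster() {
  name="$1"
  got="$(curl -s --max-time 5 "http://${name}.imroc.cc/" || true)"
  if [ "$got" = "$name" ]; then
    echo "${name}: OK"
  else
    echo "${name}: unexpected response '${got}'"
  fi
}
```

Once routing works, `check_cluster cluster1` (and likewise for cluster2 and cluster3) should print `cluster1: OK`.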

In summary, the article demonstrates a practical solution for cross‑account, multi‑cluster traffic unification on Tencent Cloud by combining TCM, CCN and TDCC. The same approach can be extended with more advanced Istio routing rules (path‑based routing, traffic splitting, etc.) to achieve finer‑grained traffic governance.
