
Mastering Kubernetes Core Resource Management: From CLI to Declarative & GUI

This guide walks through three fundamental methods for managing Kubernetes core resources—imperative CLI commands, declarative manifest files, and GUI dashboards—covering namespaces, deployments, services, and addons like Flannel, CoreDNS, Traefik, and the Kubernetes dashboard, with detailed commands, configuration examples, and troubleshooting tips.


1. Kubernetes Core Resource Management Methods

1.1 Imperative (CLI) Management

Manage core resources using kubectl commands.

Manage Namespace

View namespaces

<code>[root@zdd211-21 ~]# kubectl get namespace
NAME              STATUS   AGE
default           Active   16h
kube-node-lease   Active   16h
kube-public       Active   16h
kube-system       Active   16h
</code>

View resources in a namespace

<code>[root@zdd211-21 ~]# kubectl get all -n default
NAME                 READY   STATUS    RESTARTS   AGE
pod/nginx-ds-8r8sc   1/1     Running   0          128m
pod/nginx-ds-pdznf   1/1     Running   0          128m

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   192.168.0.1   <none>        443/TCP   16h

NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/nginx-ds   2         2         2       2            2           <none>          128m
</code>

Create a namespace

<code>[root@zdd211-21 ~]# kubectl create ns app
namespace/app created
[root@zdd211-21 ~]# kubectl get ns
NAME              STATUS   AGE
app               Active   12s
default           Active   16h
kube-node-lease   Active   16h
kube-public       Active   16h
kube-system       Active   16h
</code>

Delete a namespace

<code>[root@zdd211-21 ~]# kubectl delete namespace app
namespace "app" deleted
[root@zdd211-21 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   16h
kube-node-lease   Active   16h
kube-public       Active   16h
kube-system       Active   16h
</code>

Manage Deployment Resources

Create a deployment

Create a deployment named nginx-dp in the kube-public namespace using the nginx image.

<code>[root@zdd211-21 ~]# kubectl create deployment nginx-dp --image=harbor.od.com/public/nginx:v1.7.9 -n kube-public
deployment.apps/nginx-dp created
</code>

View the deployment

<code>[root@zdd211-21 ~]# kubectl get deployment -n kube-public
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-dp   1/1     1            1           48s
</code>
Simple view
<code>[root@zdd211-21 ~]# kubectl get pods -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dp-5dfc689474-2m8hc   1/1     Running   0          2m24s
</code>
Extended view
<code>[root@zdd211-21 ~]# kubectl get pods -n kube-public -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
nginx-dp-5dfc689474-2m8hc   1/1     Running   0          3m52s   172.7.21.3   zdd211-21.host.com   <none>           <none>
</code>
Detailed view
<code>[root@zdd211-21 ~]# kubectl describe deployment nginx-dp -n kube-public
Name:                   nginx-dp
Namespace:              kube-public
CreationTimestamp:     Sun, 01 Mar 2020 17:41:06 +0800
Labels:                app=nginx-dp
Annotations:           deployment.kubernetes.io/revision: 1
Selector:              app=nginx-dp
Replicas:              1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:          RollingUpdate
MinReadySeconds:       0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
...</code>

View Pod Resources

<code>[root@zdd211-21 ~]# kubectl get pods -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dp-5dfc689474-2m8hc   1/1     Running   0          15m
</code>

Enter a Pod

<code>[root@zdd211-21 ~]# kubectl exec -it nginx-dp-5dfc689474-2m8hc -n kube-public bash
root@nginx-dp-5dfc689474-2m8hc:/# ip addr show
...</code>

Delete (Restart) a Pod

Deleting a pod forces the controller to recreate it.

<code>[root@zdd211-21 ~]# kubectl delete pods nginx-dp-5dfc689474-2m8hc -n kube-public
pod "nginx-dp-5dfc689474-2m8hc" deleted
[root@zdd211-21 ~]# kubectl get pods -n kube-public
NAME                        READY   STATUS    RESTARTS   AGE
nginx-dp-5dfc689474-qk5j2   1/1     Running   0          7s
</code>

Force delete:

<code>[root@zdd211-21 ~]# kubectl delete pods nginx-dp-5dfc689474-qk5j2 -n kube-public --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "nginx-dp-5dfc689474-qk5j2" force deleted
</code>

Delete a Deployment

<code>[root@zdd211-21 ~]# kubectl delete deployment nginx-dp -n kube-public
deployment.extensions "nginx-dp" deleted
[root@zdd211-21 ~]# kubectl get pods -n kube-public
No resources found.
</code>

Manage Service Resources

Create a service (expose deployment)

<code>[root@zdd211-21 ~]# kubectl expose deployment nginx-dp --port=80 -n kube-public
service/nginx-dp exposed
</code>

View the service

<code>[root@zdd211-21 ~]# kubectl get svc -n kube-public
NAME       TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
nginx-dp   ClusterIP   192.168.197.188   <none>        80/TCP    62s
</code>

The ClusterIP stays fixed for the lifetime of the Service: kube-proxy forwards traffic to whichever pods currently match the selector, so pod restarts and rescheduling never change the address clients use.

Kubectl Summary

kubectl is the official CLI tool that communicates with the API server, translating user commands into API calls to manage Kubernetes resources.

<code>kubectl --help
kubectl controls the Kubernetes cluster manager.

Basic Commands (Beginner):
  create   Create a resource from a file or from stdin.
  expose   Expose a resource as a new Service.
  run      Run a particular image on the cluster
  set      Set specific features on objects

Basic Commands (Intermediate):
  explain  Documentation of resources
  get      Display one or many resources
  edit     Edit a resource on the server
  delete   Delete resources

Deploy Commands:
  rollout  Manage the rollout of a resource
  scale    Set a new size for a Deployment, ReplicaSet, etc.
  autoscale  Auto‑scale a Deployment, ReplicaSet, or ReplicationController
</code>

1.2 Declarative (Manifest) Management

Manage resources using YAML/JSON manifest files.
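For example, the nginx-dp deployment created imperatively in section 1.1 could be written as a minimal declarative manifest (a sketch; the label and replica choices are assumptions that mirror what kubectl create deployment generates):

<code># nginx-dp.yaml -- declarative equivalent of
# kubectl create deployment nginx-dp --image=... -n kube-public
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
  namespace: kube-public
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-dp
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx
        image: harbor.od.com/public/nginx:v1.7.9
</code>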

View a Manifest

<code>[root@zdd211-21 ~]# kubectl get pod nginx-dp-5dfc689474-gwghg -o yaml -n kube-public
apiVersion: v1
kind: Pod
metadata:
  name: nginx-dp-5dfc689474-gwghg
  namespace: kube-public
  ...
</code>

Explain a Resource

<code>[root@zdd211-21 ~]# kubectl explain service.metadata
KIND:     Service
VERSION:  v1

RESOURCE: metadata <Object>
DESCRIPTION:
  Standard object's metadata. More info:
  https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata

FIELDS:
  annotations   <map[string]string>
    Annotations is an unstructured key value map stored with a resource that may be set by external tools.
  clusterName    <string>
    The name of the cluster which the object belongs to.
  ...
</code>

Create a Manifest

<code># nginx-ds-svn.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ds
  name: nginx-ds
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ds
  sessionAffinity: None
  type: ClusterIP
</code>

Apply a Manifest

<code>[root@zdd211-21 ~]# kubectl create -f nginx-ds-svn.yaml
service/nginx-ds created
</code>

Modify a Manifest

Edit the port in the local file, then recreate the Service. (Here the old object is deleted first; for most field changes, kubectl apply -f alone updates the resource in place.)

<code># edit nginx-ds-svn.yaml, change port to 180
[...]
port: 180
[...]
[root@zdd211-21 ~]# kubectl delete -f nginx-ds-svn.yaml
service "nginx-ds" deleted
[root@zdd211-21 ~]# kubectl apply -f nginx-ds-svn.yaml
service/nginx-ds created
</code>

Delete via Manifest

Imperative delete

<code>[root@zdd211-21 ~]# kubectl delete service nginx-dp -n kube-public
service "nginx-dp" deleted
</code>

Declarative delete

<code>[root@zdd211-21 ~]# kubectl delete -f nginx-ds-svn.yaml
service "nginx-ds" deleted
</code>

1.3 GUI Management (Dashboard)

Dashboard installation and usage are covered in section 2.4 below.

2. Kubernetes Core Add‑ons (Plugins)

2.1 Flannel CNI Network Plugin

Cluster planning (two hosts):

zdd211-21.host.com – role: flannel – IP: 10.211.55.21

zdd211-22.host.com – role: flannel – IP: 10.211.55.22

Download, extract, and link the binary.

<code>[root@zdd211-21 ~]# cd /opt/src/
[root@zdd211-21 src]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@zdd211-21 src]# mkdir /opt/flannel-v0.11.0
[root@zdd211-21 src]# tar -xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
[root@zdd211-21 src]# ln -sf /opt/flannel-v0.11.0/ /opt/flannel
</code>

Create configuration files.

<code># subnet.env
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.21.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
</code>
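Each node carves its own /24 out of the 172.7.0.0/16 pod network. On the second node, the file would differ only in the subnet line (a sketch, consistent with the 172.7.22.x addresses pinged later):

<code># subnet.env on zdd211-22 (sketch; only the subnet differs)
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.22.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
</code>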
<code># flanneld.sh
#!/bin/sh
./flanneld \
  --public-ip=10.211.55.21 \
  --etcd-endpoints=https://10.211.55.12:2379,https://10.211.55.21:2379,https://10.211.55.22:2379 \
  --etcd-keyfile=./certs/client-key.pem \
  --etcd-certfile=./certs/client.pem \
  --etcd-cafile=./certs/ca.pem \
  --iface=eth0 \
  --subnet-file=./subnet.env \
  --healthz-port=2401
</code>

Write the Flannel network configuration into etcd. Flannel reads this key at startup; the host-gw backend programs static routes on each host, which requires all nodes to share a layer-2 segment (for routed networks, use the vxlan backend instead).

<code>[root@zdd211-21 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'
</code>

Create a supervisor unit to keep flanneld running.

<code>[program:flanneld-211-21]
command=/opt/flannel/flanneld.sh
autostart=true
autorestart=true
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log
</code>

Start the service and verify.

<code>[root@zdd211-21 etcd]# supervisorctl update
flanneld-211-21: added process group
[root@zdd211-21 etcd]# supervisorctl status
flanneld-211-21                  RUNNING   pid 9776, uptime 0:01:51
</code>

Test network connectivity before and after installing Flannel.

<code># Before Flannel
[root@zdd211-21 ~]# ping 172.7.22.3
PING 172.7.22.3 (172.7.22.3) 56(84) bytes of data.
... (no reply)

# After Flannel
[root@zdd211-21 ~]# ping 172.7.22.3
64 bytes from 172.7.22.3: icmp_seq=1 ttl=63 time=0.218 ms
...</code>

2.2 CoreDNS Service‑Discovery Plugin

CoreDNS replaces the older kube‑dns for cluster DNS.

Prepare Images

<code>[root@zdd211-200 ~]# docker pull coredns/coredns:1.6.1
[...]
[root@zdd211-200 ~]# docker tag c0f6e815079e harbor.od.com/public/coredns:v1.6.1
[root@zdd211-200 ~]# docker push harbor.od.com/public/coredns:v1.6.1
</code>

Apply Manifests

<code># rbac.yaml (ServiceAccount, ClusterRole, ClusterRoleBinding)
# cm.yaml (ConfigMap with Corefile)
# dp.yaml (Deployment using the coredns image)
# svc.yaml (ClusterIP Service, IP 192.168.0.2)
</code>
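The heart of cm.yaml is the Corefile. A minimal sketch consistent with the service IPs used in this cluster (the kubernetes-plugin arguments and the upstream forwarder address are assumptions):

<code># cm.yaml (sketch) -- Corefile held in the coredns ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local 192.168.0.0/16
        forward . 10.211.55.11
        cache 30
        loop
        reload
    }
</code>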

Apply them from any node:

<code>[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/cm.yaml
configmap/coredns created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/dp.yaml
deployment.apps/coredns created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yaml
service/coredns created
</code>

Verify the deployment:

<code>[root@zdd211-21 ~]# kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-6b6c4f9648-5sclg   1/1     Running   0          38s

NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                  AGE
service/coredns   ClusterIP   192.168.0.2   <none>        53/UDP,53/TCP,9153/TCP   30s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   1/1     1            1           38s
</code>

Test DNS resolution:

<code>[root@zdd211-21 ~]# dig -t A www.baidu.com @192.168.0.2 +short
www.a.shifen.com.
61.135.169.121
61.135.169.125
</code>

2.3 Traefik Ingress Controller

Prepare the Traefik image.

<code>[root@zdd211-200 ~]# docker pull traefik:v1.7.2-alpine
[...]
[root@zdd211-200 ~]# docker tag add5fac61ae5 harbor.od.com/public/traefik:v1.7.2
[root@zdd211-200 ~]# docker push harbor.od.com/public/traefik:v1.7.2
</code>

Apply Manifests

<code># rbac.yaml (ServiceAccount & ClusterRole/Binding)
# ds.yaml (DaemonSet running Traefik on hostPort 81)
# svc.yaml (ClusterIP Service exposing ports 80 and 8080)
# ingress.yaml (Ingress object routing host traefik.od.com to the service)
</code>
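For instance, ingress.yaml might look like the following sketch. The extensions/v1beta1 API matches the ingress.extensions output of this cluster's kubectl; the annotation value is an assumption:

<code># ingress.yaml (sketch) -- routes traefik.od.com to the Traefik web UI
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: traefik.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-ingress-service
          servicePort: 8080
</code>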

Deploy from a node:

<code>[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml
serviceaccount/traefik-ingress-controller created
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ds.yaml
daemonset.extensions/traefik-ingress created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml
service/traefik-ingress-service created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml
ingress.extensions/traefik-web-ui created
</code>

Check pods:

<code>[root@zdd211-21 ~]# kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6b6c4f9648-5sclg   1/1     Running   0          81m
traefik-ingress-9927c      1/1     Running   0          50m
traefik-ingress-sqt2n      1/1     Running   0          50m
</code>

Configure DNS for traefik.od.com

Add an A record for traefik.od.com pointing to the internal IP (10.211.55.10) in the od.com zone.
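Assuming the zone is served by BIND, the record is a one-line addition (a sketch; remember to bump the zone serial and reload named):

<code>; od.com zone file fragment (sketch)
traefik            A    10.211.55.10
</code>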

2.4 Kubernetes Dashboard

Prepare the dashboard image (mirrored to a private registry).

<code># Pull and push the image
[...]
</code>

Apply Dashboard Manifests

<code># rbac.yaml (ServiceAccount with cluster‑admin role)
# deployment.yaml (Dashboard deployment, auto‑generate certs)
# service.yaml (ClusterIP Service on port 443 -> targetPort 8443)
# ingress.yaml (Ingress using Traefik, host dashboard.od.com)
</code>
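The admin token used for login comes from a ServiceAccount bound to cluster-admin. A sketch of that binding (names inferred from the kubernetes-dashboard-admin secret shown in the token step):

<code># rbac sketch -- ServiceAccount with cluster-admin for token login
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system
</code>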

Deploy:

<code>[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/rbac.yaml
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/deployment.yaml
deployment.apps/kubernetes-dashboard created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/service.yaml
service/kubernetes-dashboard created

[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/dashboard_1.10.1/ingress.yaml
ingress.extensions/kubernetes-dashboard created
</code>

Obtain a login token:

<code>[root@zdd211-21 ~]# kubectl get secret -n kube-system | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-h68bw   kubernetes.io/service-account-token   3      17m

[root@zdd211-21 ~]# kubectl describe secret kubernetes-dashboard-admin-token-h68bw -n kube-system | grep token
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9... (truncated)
</code>

Access https://dashboard.od.com via the Nginx reverse proxy (SSL certificates installed) and log in with the token.

2.5 Heapster Monitoring Add‑on

Pull the Heapster image and push to the private registry.

<code>[root@zdd211-200 ~]# docker pull quay.io/bitnami/heapster:1.5.4
[...]
[root@zdd211-200 ~]# docker tag c359b95ad38b harbor.od.com/public/heapster:1.5.4
[root@zdd211-200 ~]# docker push harbor.od.com/public/heapster:1.5.4
</code>

Apply Heapster Manifests

<code># heapster.yaml (ServiceAccount, ClusterRoleBinding, Deployment, Service)
</code>
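A sketch of the Deployment portion of heapster.yaml, using the image pushed above (the binary path and command flags are the usual Heapster in-cluster configuration; treat them as assumptions):

<code># heapster.yaml (Deployment portion, sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      task: monitoring
  template:
    metadata:
      labels:
        task: monitoring
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: harbor.od.com/public/heapster:1.5.4
        command:
        - /opt/bitnami/heapster/bin/heapster
        - --source=kubernetes:https://kubernetes.default
</code>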
<code>[root@zdd211-21 ~]# kubectl apply -f http://k8s-yaml.od.com/heapster/heapster.yaml
service/heapster created
deployment.apps/heapster created
</code>

Heapster now provides metrics that the dashboard can display.

Screenshots: dashboard metrics view, Traefik UI, dashboard login page.
Written by

Ops Development Stories

Maintained by a like‑minded team, covering both operations and development. Topics span Linux ops, DevOps toolchain, Kubernetes containerization, monitoring, log collection, network security, and Python or Go development. Team members: Qiao Ke, wanger, Dong Ge, Su Xin, Hua Zai, Zheng Ge, Teacher Xia.
