
Mastering Kubernetes Probes: Liveness, Readiness, and Startup Explained

This article explains why Kubernetes health probes are essential, describes the three probe types (liveness, readiness, and startup) along with their check methods and configuration options, provides complete YAML examples, walks through failure scenarios, and outlines additional Kubernetes mechanisms that help keep containers available in a cloud‑native environment.

Efficient Ops

Introduction

Sometimes an application becomes unresponsive due to an infinite loop or deadlock. To ensure the application can be restarted in such cases, a mechanism is needed to check the application's health from the outside rather than relying on internal checks.

Kubernetes probes

Kubernetes provides three types of probes for this purpose:

Liveness probe: checks whether the container is still alive. If it fails, the kubelet kills the container and restarts it according to the pod's restartPolicy.

Readiness probe: checks whether the container is ready to receive traffic. If it is not ready, the pod is removed from the Service's endpoints and no traffic is routed to it.

Startup probe: checks whether the application inside the container has finished starting. While the startup probe is running, the liveness and readiness probes are suspended; once it succeeds, it does not run again and the other probes take over.
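A common pattern combines a startup probe with a generous failure budget and a stricter liveness probe that takes over afterwards. The sketch below assumes a hypothetical slow-starting app exposing a /healthz endpoint on port 8080; the names and values are illustrative, not from the examples later in this article:

<code>    # The startup probe allows up to 30 × 10 s = 300 s for the first
    # successful check. Until it succeeds, the liveness probe below
    # is not executed at all.
    startupProbe:
      httpGet:
        path: /healthz   # assumed health endpoint
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    # Once startup has succeeded, the liveness probe runs with a
    # much tighter failure budget (3 × 10 s = 30 s).
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 3
      periodSeconds: 10</code>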

Probe check methods

exec: runs a specified command inside the container and treats an exit code of 0 as success.

httpGet: sends an HTTP GET request to the container's IP, port and path; a response status code between 200 and 399 is considered healthy.

tcpSocket: attempts to open a TCP connection to the container's IP and port; a successful connection indicates health.
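As illustrative fragments, the three check methods look as follows. A single probe uses exactly one method, and the commands, paths, and ports here are placeholders:

<code>    # exec: a command run inside the container; exit code 0 = healthy
    livenessProbe:
      exec:
        command: ['cat', '/tmp/healthy']

    # httpGet: an HTTP GET; status 200–399 = healthy
    readinessProbe:
      httpGet:
        path: /
        port: 80

    # tcpSocket: a TCP connection attempt; established = healthy
    livenessProbe:
      tcpSocket:
        port: 3306</code>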

Configuration options

initialDelaySeconds: seconds to wait after the container starts before the first probe runs.

periodSeconds: interval between probe executions.

timeoutSeconds: probe timeout; a probe that has not returned within this duration counts as a failure.

successThreshold: consecutive successes required for the probe to be considered successful again after a failure (must be 1 for liveness and startup probes).

failureThreshold: consecutive failures after which Kubernetes acts: the container is restarted (liveness/startup) or the pod is marked not ready (readiness).
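These fields combine: in the worst case, a container that never responds is acted on roughly initialDelaySeconds + failureThreshold × periodSeconds after it starts. A sketch with illustrative values (not the ones used in the examples below):

<code>    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5   # first check at t ≈ 5 s
      periodSeconds: 10        # then every 10 s
      timeoutSeconds: 2        # each check may take at most 2 s
      successThreshold: 1      # must be 1 for liveness and startup probes
      failureThreshold: 3      # restart after 3 straight failures
    # Worst case: 5 + 3 × 10 = 35 s from container start to restart.</code>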

Startup probe example

<code>apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx-ready
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: Always
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    startupProbe:
      failureThreshold: 3
      exec:
        # 'echo' always exits 0, so this startup probe always succeeds
        command: ['/bin/sh', '-c', 'echo Hello World']
      initialDelaySeconds: 3
      timeoutSeconds: 2
      periodSeconds: 1
      successThreshold: 1
  restartPolicy: Always</code>

Apply the manifest and verify the pod:

<code># kubectl apply -f pod.yaml
pod/nginx created
# kubectl get pod
NAME    READY   STATUS              RESTARTS   AGE
nginx   0/1     ContainerCreating   0          4s
# kubectl describe pod nginx</code>

Readiness probe example

<code># grep -v '^#' pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx-ready
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: Always
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 3
      timeoutSeconds: 2
      periodSeconds: 1
      successThreshold: 1
      failureThreshold: 3
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: ready-nodeport
  labels:
    name: ready-nodeport
spec:
  type: NodePort
  ports:
  - port: 88
    protocol: TCP
    targetPort: 80
    nodePort: 30880
  selector:
    app: nginx-ready</code>

After creating the pod and service, the application is reachable at the NodePort:

<code># kubectl get pod
NAME   READY   STATUS   RESTARTS   AGE
nginx  1/1     Running  0          15s
# curl http://192.168.10.10:30880
... (nginx welcome page) ...</code>

Changing the readiness probe's port to 81 simulates a failure: the pod is reported not ready and the Service stops routing traffic to it:

<code># kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
nginx  0/1     Running   0          22s
# curl http://192.168.10.10:30880
curl: (7) Failed connect to 192.168.10.10:30880; Connection refused</code>
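One way to confirm that the not-ready pod has been removed from the Service is to inspect its endpoints. A sketch, using the ready-nodeport Service from the manifest above (exact output formatting varies by kubectl version):

<code># kubectl get endpoints ready-nodeport
NAME             ENDPOINTS   AGE
ready-nodeport   &lt;none&gt;      30s</code>

With no ready endpoints behind the Service, connections to the NodePort are refused, which matches the curl error above.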

Liveness probe example

<code># grep -v '^#' pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx-ready
spec:
  containers:
  - name: nginx
    image: nginx:latest
    imagePullPolicy: Always
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /
        port: 80
        scheme: HTTP
      initialDelaySeconds: 3
      timeoutSeconds: 2
      periodSeconds: 1
      successThreshold: 1
      failureThreshold: 3
  restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: ready-nodeport
spec:
  type: NodePort
  ports:
  - port: 88
    protocol: TCP
    targetPort: 80
    nodePort: 30880
  selector:
    app: nginx-ready</code>

When the liveness probe fails repeatedly, the container is restarted according to the restartPolicy (default Always), and the pod eventually enters CrashLoopBackOff.
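One way to trigger the liveness probe in the nginx example above is to delete the default index page: GET / then returns 403, which is outside the healthy 200–399 range, so the container is restarted. A session sketch (the restart count and timing are illustrative):

<code># kubectl exec nginx -- rm /usr/share/nginx/html/index.html
# kubectl get pod nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   1          2m</code>

Because restarting the container recreates its filesystem from the image, index.html reappears after the restart and the pod recovers on its own.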

Additional Kubernetes mechanisms for availability

RC (ReplicationController) or ReplicaSet: ensures a specified number of pod replicas are running.

Deployment: manages ReplicaSets and provides rolling updates and rollback capabilities.

Service: gives pods a stable IP/DNS name and load‑balances traffic to healthy (ready) pods.

Namespace: isolates resources within a cluster.

Tags: Kubernetes, container, YAML, startupProbe, liveness, readiness
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
