
How to Run Multiple Containers Sequentially in a Single Kubernetes Pod

This article explains how to execute several containers one after another within a single Kubernetes pod by leveraging initContainers and native Job mechanisms, compares alternative solutions such as Volcano and Argo, provides complete YAML examples, and discusses practical considerations like volume sharing, security contexts, and timeout settings.

Efficient Ops

Sometimes you need to run multiple containers in order inside a single Kubernetes pod. A Kubernetes Job can run multiple containers, but they run concurrently by default.
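To see why this is a problem, consider a plain Job that lists two containers; both start at the same time, and nothing in the spec expresses ordering (a minimal illustration, not from the examples below):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: concurrent-jobs
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        # Both containers are started concurrently when the Pod runs;
        # the list order carries no execution-order semantics.
        - name: task-a
          image: alpine:3.11
          command: ["sh", "-c", "echo task-a"]
        - name: task-b
          image: alpine:3.11
          command: ["sh", "-c", "echo task-b"]
```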

One way is to handle ordering at the application level, e.g., using a shared local volume with file locks, but that adds complexity. Using Kubernetes primitives can simplify the task.
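To make the application-level approach concrete, here is a minimal sketch of the file-based coordination pattern, simulated locally with two shell functions standing in for containers and a temp directory standing in for the shared volume (all names here are illustrative):

```shell
# Both "containers" start concurrently, but job_2 blocks until job_1
# drops a sentinel file on the shared volume.
SHARED=$(mktemp -d)

job_1() {
  echo "job-1 doing work"
  echo code > "$SHARED/code"       # signal completion
}

job_2() {
  # busy-wait for job-1's sentinel file before starting
  until [ -f "$SHARED/code" ]; do sleep 0.1; done
  echo "job-2 saw: $(cat "$SHARED/code")"
}

job_2 &          # started first, but still waits for job-1's signal
sleep 0.3
job_1
wait             # job-2 finishes only after job-1 has signalled
```

This works, but every container image now has to carry the polling logic, which is exactly the complexity the Kubernetes-native approach below avoids.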

Kubernetes Job with initContainers

Although the containers in a Pod's containers list cannot be ordered, initContainers are guaranteed to run one after another, each completing successfully before the next starts, and all before the main containers start. By placing the earlier tasks in initContainers and the final task in the regular containers list, you can achieve sequential execution.

<code>apiVersion: batch/v1
kind: Job
metadata:
  name: sequential-jobs
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 3600
  template:
    spec:
      activeDeadlineSeconds: 60
      restartPolicy: Never
      initContainers:
        - name: job-1
          image: alpine:3.11
          command: ["sh", "-c", "for i in 1 2 3; do echo \"job-1 `date`\"; sleep 1s; done; echo code > /srv/input/code"]
          volumeMounts:
            - name: input
              mountPath: /srv/input/
        - name: job-2
          image: alpine:3.11
          command: ["sh", "-c", "for i in 1 2 3; do echo \"job-2 `date`\"; sleep 1s; done; cat /srv/input/code && echo artifact > /srv/input/output/artifact"]
          resources:
            requests:
              cpu: 3
          volumeMounts:
            - name: input
              mountPath: /srv/input/
            - name: output
              mountPath: /srv/input/output/
      containers:
        - name: job-3
          image: alpine:3.11
          command: ["sh", "-c", "echo \"job-1 and job-2 completed\"; sleep 3s; cat /srv/output/artifact"]
          volumeMounts:
            - name: output
              mountPath: /srv/output/
      volumes:
        - name: input
          emptyDir: {}
        - name: output
          emptyDir: {}
      securityContext:
        runAsUser: 2000
        runAsGroup: 2000
        fsGroup: 2000
</code>
backoffLimit: 0 prevents the Job from retrying on failure.

volumes defines the input and output emptyDir volumes used for data exchange between the containers.

securityContext runs the containers with a non-root UID/GID so that volume operations do not depend on root.

activeDeadlineSeconds sets a timeout for the Pod.

ttlSecondsAfterFinished controls when the finished Job is automatically deleted.

Sample logs after execution:

<code>$ kubectl logs sequential-jobs-r4725 job-1
job-1 Tue Jul 28 07:50:10 UTC 2020
job-1 Tue Jul 28 07:50:11 UTC 2020
job-1 Tue Jul 28 07:50:12 UTC 2020
$ kubectl logs sequential-jobs-r4725 job-2
job-2 Tue Jul 28 07:50:13 UTC 2020
job-2 Tue Jul 28 07:50:14 UTC 2020
job-2 Tue Jul 28 07:50:15 UTC 2020
code
$ kubectl logs sequential-jobs-r4725 job-3
job-1 and job-2 completed
artifact
</code>

Volcano

Volcano (formerly kube‑batch) adds a tasks layer, but it still cannot enforce an execution order among containers within a task. The YAML is similar to the native Job with an extra tasks section.

<code>apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: volcano-sequential-jobs
spec:
  minAvailable: 1
  schedulerName: volcano
  queue: default
  tasks:
    - replicas: 1
      name: "task-1"
      template:
        spec:
          restartPolicy: Never
          initContainers:
            - name: job-1
              image: alpine:3.11
              command: ["sh", "-c", "for i in 1 2 3; do echo \"job-1 `date`\"; sleep 1s; done; echo code > /srv/input/code"]
              volumeMounts:
                - name: input
                  mountPath: /srv/input/
            - name: job-2
              image: alpine:3.11
              command: ["sh", "-c", "for i in 1 2 3; do echo \"job-2 `date`\"; sleep 1s; done; cat /srv/input/code && echo artifact > /srv/input/output/artifact"]
              resources:
                requests:
                  cpu: 3
              volumeMounts:
                - name: input
                  mountPath: /srv/input/
                - name: output
                  mountPath: /srv/input/output/
          containers:
            - name: job-done
              image: alpine:3.11
              command: ["sh", "-c", "echo \"job-1 and job-2 completed\"; sleep 3s; cat /srv/output/artifact"]
              volumeMounts:
                - name: output
                  mountPath: /srv/output/
          volumes:
            - name: input
              emptyDir: {}
            - name: output
              emptyDir: {}
          securityContext:
            runAsUser: 2000
            runAsGroup: 2000
            fsGroup: 2000
</code>

Compared to the native Job, Volcano adds the tasks abstraction but offers no functional advantage for sequential execution.

Sample logs are analogous to the native Job logs.

Argo

Argo Workflows can model sequential dependencies, but each task runs in a separate pod. This requires shared storage (e.g., NFS) for data exchange, which may not meet performance needs for I/O‑intensive workloads.
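For reference, a minimal sketch of how the same pipeline might look in Argo Workflows. Each step below runs in its own pod, so the shared volume must be network-backed storage rather than an emptyDir; the PVC name is a hypothetical assumption:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sequential-jobs-
spec:
  entrypoint: pipeline
  # Steps run in separate pods, so data exchange needs shared
  # storage (e.g., an NFS-backed PVC), not an emptyDir.
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: nfs-shared   # hypothetical claim
  templates:
    - name: pipeline
      steps:                    # each inner list waits for the previous one
        - - name: job-1
            template: job-1
        - - name: job-2
            template: job-2
    - name: job-1
      container:
        image: alpine:3.11
        command: ["sh", "-c", "echo code > /srv/shared/code"]
        volumeMounts:
          - name: shared
            mountPath: /srv/shared/
    - name: job-2
      container:
        image: alpine:3.11
        command: ["sh", "-c", "cat /srv/shared/code"]
        volumeMounts:
          - name: shared
            mountPath: /srv/shared/
```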

Conclusion

While Argo avoids the use of initContainers, its separate‑pod model makes it unsuitable for this scenario. The native Kubernetes Job with initContainers remains the most straightforward solution, and Volcano may be considered after further investigation.

Tags: Kubernetes, YAML, Job, Volcano, Argo, initContainers, sequential
Written by

Efficient Ops

This public account is maintained by Xiaotianguo and friends, regularly publishing widely-read original technical articles. We focus on operations transformation and accompany you throughout your operations career, growing together happily.
