Restartable Init Containers as Sidecar Feature in Kubernetes 1.28
This article introduces the new sidecar feature in Kubernetes 1.28, which makes init containers restartable by adding a restartPolicy field to them, gated behind the SidecarContainers feature gate. It describes the feature's behavior, shows usage examples, discusses when to adopt it, outlines migration steps, lists known issues, and invites community feedback.
The sidecar concept has been part of Kubernetes since 2015, originally used as an auxiliary container to extend or enhance the main container, often for networking or logging.
What are sidecar containers in Kubernetes 1.28?
Kubernetes 1.28 adds a restartPolicy field to init containers, usable when the SidecarContainers feature gate is enabled.
```yaml
apiVersion: v1
kind: Pod
spec:
  initContainers:
    - name: secret-fetch
      image: secret-fetch:1.0
    - name: network-proxy
      image: network-proxy:1.0
      restartPolicy: Always
  containers:
    ...
```
If set, the only valid value is Always. This changes init container behavior in three ways:
If the container exits, it restarts.
All later init containers start immediately after the startupProbe succeeds, rather than waiting for the restartable init container to exit.
Pod resource calculations now include resources of restartable init containers.
Pod termination continues to be based solely on the main containers; init containers with restartPolicy: Always (called sidecar containers) do not prevent pod termination once the main containers have exited.
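Putting these behaviors together, a minimal sketch of a pod that gates the remaining init containers on the sidecar's startupProbe (the container names, image tags, and probe endpoint here are illustrative, not from the original):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-gated-startup
spec:
  initContainers:
    - name: network-proxy          # restartable: restarts whenever it exits
      image: network-proxy:1.0     # illustrative image name
      restartPolicy: Always
      startupProbe:                # later init containers start once this succeeds
        httpGet:
          path: /healthz           # assumed health endpoint
          port: 15021
    - name: db-migrate             # ordinary init container: runs after the probe succeeds,
      image: db-migrate:1.0        # must exit before the main containers start
  containers:
    - name: app
      image: app:1.0
```

Without the startupProbe, later init containers would start as soon as the restartable init container has started, so the probe is what makes "proxy is actually ready" the gating condition.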
Restartable init containers are well suited for sidecar deployment patterns because they have a defined start order, do not extend pod lifetime, and automatically restart on failure, improving reliability.
When to use sidecar containers
Typical workloads that benefit include:
Batch or AI/ML jobs that run for a limited time.
Network proxies that must start before all other containers (see the Istio native sidecar blog).
Log collection containers that should start early and run until pod termination.
Jobs where the sidecar should not block job completion.
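For the Job case in particular, a hedged sketch (names and images are illustrative) of a Job whose log-collecting sidecar runs for the life of the pod but does not block Job completion:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never          # pod-level policy for the main (worker) container
      initContainers:
        - name: log-collector       # sidecar: starts first, restarts on failure
          image: log-collector:1.0  # illustrative image
          restartPolicy: Always
      containers:
        - name: worker              # when this exits, the pod terminates and the Job
          image: worker:1.0         # can complete; the sidecar does not hold it open
```

Note that the container-level restartPolicy: Always on the sidecar overrides the pod-level restartPolicy: Never for that one container, which is exactly what previously made sidecars in Jobs awkward to run.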
How users implemented sidecar behavior before 1.28
Prior approaches:
Using init containers, which have a shorter lifecycle than the pod; the sidecar had to exit before the main containers could start, so it could not run alongside them.
Running sidecars as regular containers sharing the pod lifecycle, which could block pod termination.
The built‑in sidecar feature solves the latter case, providing start‑order control and not blocking pod termination.
Transitioning existing sidecars to the new model
During the Alpha phase, enable the SidecarContainers feature gate on short-lived test clusters. Move existing sidecars to the initContainers section and set restartPolicy: Always. In many cases they will continue to work, with the added benefits described above.
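As a sketch of that move (the log-shipper name and images are illustrative), the same sidecar before and after migration:

```yaml
# Before: sidecar declared as a regular container. It shares the pod
# lifecycle and can block pod termination.
spec:
  containers:
    - name: app
      image: app:1.0
    - name: log-shipper
      image: log-shipper:1.0

# After: sidecar declared as a restartable init container. It starts
# before the main containers and does not block pod termination.
spec:
  initContainers:
    - name: log-shipper
      image: log-shipper:1.0
      restartPolicy: Always
  containers:
    - name: app
      image: app:1.0
```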
Known issues
Alpha‑stage known issues to be addressed before Beta:
CPU, memory, device, and topology manager do not account for sidecar resources, leading to under‑estimation of pod resource usage.
kubectl describe node reports lower resource usage because it ignores sidecar containers.
We need your feedback!
During Alpha, try the sidecar containers in your environment and file issues for errors or unexpected behavior. We are especially interested in feedback on shutdown ordering, restart back‑off timeout adjustments, and readiness/liveness probe behavior for sidecars.
File issues at the Kubernetes GitHub repository.
What’s next?
Beyond fixing known issues, we are working on adding termination ordering so sidecars terminate only after the main containers exit.
We look forward to community input on this new sidecar capability.
Acknowledgements
Thanks to the many contributors listed in the original KEP and community members who helped design, review, and implement this feature.
More information
Read the Kubernetes documentation on the sidecar container API.
Read the Sidecar KEP.
Cloud Native Technology Community
The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.