
2023 Kubernetes Reliability Benchmark Highlights Common Configuration Gaps

The 2023 Fairwinds Kubernetes benchmark, analyzing over 150,000 workloads, reveals that many organizations still miss critical best‑practice configurations such as memory limits, liveness probes, proper image pull policies, replica counts, and CPU limits or requests, leading to increased security risks, uncontrolled cloud costs, and reduced reliability.

Cloud Native Technology Community

Cloud computing continues to be the preferred platform for building applications and services, and despite economic uncertainty, most organizations expect their cloud usage and spend this year to meet or exceed plans, while seeking ways to control high costs and maintain reliable Kubernetes workloads.

Fairwinds analyzed more than 150,000 workloads from hundreds of organizations for its 2023 Kubernetes Benchmark Report, comparing data collected in 2022 against the prior year's results. The report shows that although Kubernetes adoption keeps growing, adherence to best-practice configurations remains a challenge, often resulting in higher security risk, uncontrolled cloud spend, and reduced reliability of cloud applications and services. Six configuration gaps stand out:

1. Missing Memory Limits and Requests – While 41% of organizations set memory limits and requests on more than 90% of workloads in 2021, only 17% do so in the latest report, suggesting many teams lack the visibility or knowledge needed to choose appropriate values.
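As a minimal sketch, memory requests and limits are set per container in the pod spec (the container name, image, and values below are illustrative, not from the report):

```yaml
containers:
  - name: app                  # illustrative name
    image: example/app:1.0     # illustrative image
    resources:
      requests:
        memory: "256Mi"        # amount the scheduler reserves on the node
      limits:
        memory: "512Mi"        # container is OOM-killed if it exceeds this
```

The request drives scheduling decisions; the limit is the hard ceiling the kubelet enforces.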

2. Missing Liveness and Readiness Probes – 83% of organizations have more than 10% of workloads without liveness or readiness probes, up from 65% the previous year, leaving failing pods to consume resources and cause errors.
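Both probe types live in the container spec; a fragment might look like the following (paths, port, and timings are illustrative):

```yaml
containers:
  - name: app
    image: example/app:1.0
    livenessProbe:             # a failing liveness probe restarts the container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:            # a failing readiness probe removes the pod from Service endpoints
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```

Without these, Kubernetes keeps routing traffic to pods that are running but no longer healthy.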

3. Image Pull Policy Not Set to Always – 25% of organizations now rely heavily on cached images for most workloads, a rise from 15% last year, which can cause version drift and reliability problems.
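Setting the pull policy explicitly is a one-line addition to the container spec (image name illustrative):

```yaml
containers:
  - name: app
    image: example/app:1.0
    imagePullPolicy: Always    # re-check the registry on every pod start instead of trusting the node's cache
```

Note that Kubernetes only defaults to Always for `:latest` or untagged images; for pinned tags the default is IfNotPresent, which is where cached-image drift creeps in.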

4. Missing Deployment Replicas – 25% of organizations have over half of their workloads running with a single replica, increasing risk of downtime when a node fails.
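The replica count is a single field on the Deployment; a fragment of an illustrative manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # illustrative name
spec:
  replicas: 3                  # two survive one node failure; three adds headroom during rollouts
```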

5. Missing CPU Limits – 86% of organizations have more than 10% of workloads lacking CPU limits, up from 36% in 2021, allowing a single container to consume all of a node's CPU and degrade the performance of everything else scheduled there.

6. Missing CPU Requests – 78% of organizations now have over 10% of workloads without CPU requests, up from 50% previously, which can starve other pods of necessary resources.
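Items 5 and 6 go in the same `resources` stanza as the memory settings (values illustrative):

```yaml
resources:
  requests:
    cpu: "250m"                # a quarter of a core reserved for scheduling
  limits:
    cpu: "1"                   # container is throttled, not killed, above one core
```

Unlike memory, exceeding a CPU limit throttles the container rather than terminating it, so a missing limit shows up as degraded neighbors rather than crashes.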

Kubernetes offers great value and flexibility, but its complexity makes proper configuration challenging; the benchmark helps organizations identify gaps and implement changes to make deployments safer, more reliable, and more efficient.

Article translated from cloudnativenow.com.

Tags: cloud-native, Kubernetes, best practices, reliability, benchmark, resource limits
Written by

Cloud Native Technology Community

The Cloud Native Technology Community, part of the CNBPA Cloud Native Technology Practice Alliance, focuses on evangelizing cutting‑edge cloud‑native technologies and practical implementations. It shares in‑depth content, case studies, and event/meetup information on containers, Kubernetes, DevOps, Service Mesh, and other cloud‑native tech, along with updates from the CNBPA alliance.
