
Understanding and Overcoming the DevOps Adoption Gap

The article analyses why many organizations stall during DevOps adoption, explains the underlying causes, discusses metric design and second‑order changes, and proposes organizational and technical strategies to cross the gap and achieve sustainable continuous delivery.

Continuous Delivery 2.0
In a previous article "DevOps Implementation Smoke‑Pipe Curve" we noted that only a few enterprises can quickly cross the "gap" of the curve and move onto continuous improvement; this piece explores how those enterprises succeed.

1. Reasons for the Gap

When an organization declares "we want DevOps", it immediately brings in tools, hires DevOps engineers, and automates existing processes (e.g., automatic test requests, notification emails). This yields some early benefits, and once process optimization reaches a certain level the focus shifts to "test acceleration", prompting the adoption of automated testing.

The first problem encountered in automated testing is automating test-environment provisioning; the benefit gained depends on how painful manual preparation used to be.
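As a hypothetical sketch of what "automating environment provisioning" means at its simplest, the following Python wraps the manual checklist in a repeatable, scripted interface. The `TestEnvironment` class and the service names are illustrative, not taken from any particular tool; a real implementation would call container or configuration-management tooling in each step.

```python
# Sketch: replacing a manual environment checklist with a scripted,
# repeatable provisioning step. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class TestEnvironment:
    name: str
    services: list = field(default_factory=list)
    ready: bool = False

    def provision(self):
        # In a real pipeline, each step would invoke infrastructure
        # tooling (containers, VMs, config management); here we only
        # record the steps so the run is deterministic and auditable.
        steps = [f"start {svc}" for svc in self.services]
        self.ready = True
        return steps

env = TestEnvironment("integration", services=["db", "queue", "app"])
log = env.provision()
print(log)        # the ordered provisioning steps
print(env.ready)  # True once provisioning has run
```

The point of even a trivial script like this is that the environment becomes reproducible on demand, which is exactly what makes the subsequent automated tests worth writing.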

After the environment is ready, testers start writing automated tests. The first two months look promising, but soon the curve reaches its first inflection point. Many teams, unaware of the slowdown, continue adding more automated test cases, only to see diminishing returns later.

At this stage two common choices appear: continue the same approach (with declining benefits) or stay at the current level, essentially remaining on the plateau of the curve.

The DevOps implementation strategy at this point is to "pick the low-hanging fruit": (1) avoid large, disruptive changes; (2) start with the easiest tasks.

Why does the curve have an initial improvement period? Because process automation reduces transactional cost in each delivery iteration. From the end‑user perspective, each software iteration consists of value‑adding activities and non‑value (transactional) activities such as testing, planning meetings, stand‑ups, etc. By lowering transactional cost while maintaining quality, the proportion of value‑adding time increases, allowing shorter iteration cycles. However, after a certain point the ratio of value to non‑value activities returns to its original level, frustrating teams.
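The arithmetic behind this argument can be made concrete with made-up figures: if automation halves the transactional cost of an iteration while the value-adding work stays constant, the value-adding share of each iteration rises, and the iteration can be shortened by the same margin. The numbers below are purely illustrative.

```python
# Toy arithmetic for the value vs. transactional-cost argument.
# All figures are invented for illustration.
value_days = 6.0        # time spent building user-visible value
transaction_days = 4.0  # testing, planning meetings, stand-ups, handoffs

# Share of value-adding time before any automation:
before = value_days / (value_days + transaction_days)      # 6/10 = 0.60

# Suppose process automation halves transactional cost while
# quality is maintained:
after = value_days / (value_days + transaction_days / 2)   # 6/8  = 0.75

print(round(before, 2), round(after, 2))  # 0.6 0.75
```

The later frustration described above corresponds to `transaction_days` creeping back up (e.g., through growing end-to-end test maintenance), dragging the ratio back toward its original level.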

The reason is that continuous delivery requires many automated test cases, which are usually owned by the testing department. Early on the tests are end-to-end; as their number grows, maintenance cost, effort, and diagnostic cost become high, leading to a plateau where the cost-benefit ratio no longer improves.

End‑to‑end automated tests are expensive to run, maintain, and diagnose, and many testers lack the skills to write lower‑level automated tests. Even when they have the skills, collaboration processes and culture often prevent timely access to the information needed to improve low‑level tests. This is why a second‑order change is required to break through the gap.

2. Two Key Considerations for Implementing DevOps

First: Correct Understanding of Metrics

Metrics have three attributes: observability, actionability, and timeliness. Timeliness splits into leading and lagging indicators. Some metrics (e.g., defects per KLOC) are observable but lagging and not directly actionable. Leading metrics such as code‑review count or coding‑standard compliance can guide behavior and are actionable.

Another example: the number of defects found between test and release is a lagging metric, while the number of self‑test cases is a leading, actionable metric that is hypothesized to reduce post‑test defects, though many factors influence the relationship.
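One minimal way to make these attributes operational is to tag each candidate metric explicitly and filter for the ones that can actually guide behavior. The `Metric` dataclass below and the specific classifications are illustrative, not a standard taxonomy.

```python
# Sketch: tagging metrics with the three attributes discussed above
# (observability, actionability, timeliness). Classifications are
# illustrative examples, not prescriptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str
    observable: bool
    actionable: bool
    timeliness: str  # "leading" or "lagging"

metrics = [
    Metric("defects per KLOC",      observable=True, actionable=False, timeliness="lagging"),
    Metric("code-review count",     observable=True, actionable=True,  timeliness="leading"),
    Metric("self-test case count",  observable=True, actionable=True,  timeliness="leading"),
    Metric("post-test defect count", observable=True, actionable=False, timeliness="lagging"),
]

# Only leading, actionable metrics can guide day-to-day behavior;
# lagging ones can merely confirm (or refute) a hypothesis afterwards.
guiding = [m.name for m in metrics if m.actionable and m.timeliness == "leading"]
print(guiding)  # ['code-review count', 'self-test case count']
```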

Software engineering struggles with measuring development efficiency; no universally accepted metric exists because effectiveness depends on organizational capability, product type, and skill levels. Metrics closer to the right side of the diagram are more process‑guiding (leading), while those on the left are outcome‑based (lagging), and their interactions can be complex.

When designing improvement plans, one must consider how changes in one metric affect others, the magnitude of impact, and the delay effect of leading indicators. Experience suggests focusing on a few well‑chosen metrics.

Collecting metric data incurs cost; if no foundation exists, initial investment can be large. Sampling can provide early, low‑cost data, though it may introduce bias and risk.
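As an illustration of low-cost sampling (with synthetic data), measuring only a small fraction of builds can yield a usable estimate of a metric's mean before any full instrumentation exists, at the price of sampling error. The population and sample sizes here are arbitrary.

```python
# Sketch: estimating a metric cheaply by sampling instead of full
# instrumentation. The data is synthetic and purely illustrative.
import random

random.seed(7)  # deterministic for reproducibility

# Pretend each of 1000 builds has a review count we could, in
# principle, measure exhaustively (at high collection cost).
population = [random.randint(0, 5) for _ in range(1000)]
true_mean = sum(population) / len(population)

# Instead, measure only 5% of builds, chosen at random.
sample = random.sample(population, 50)
estimate = sum(sample) / len(sample)

# The estimate tracks the true mean but carries sampling error,
# which is the bias/risk trade-off mentioned above.
print(round(true_mean, 2), round(estimate, 2))
```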

Second: Seek System‑Structure Change (Second‑Order Change)

The book "Change" describes first‑order change (adjustments without altering system structure) and second‑order change (altering the structure itself to achieve qualitative improvement). Applying this to DevOps means changing who owns automated test cases—from testers to developers.

When developers become the primary users of automated tests, four principles must be met: fast execution, convenient one‑click run, trustworthy results, and timely availability (the test should be ready when the developer finishes a feature).

Fast – each test case must run very quickly.

Convenient – developers should be able to run a chosen test suite with a single click.

Trustworthy – results must be reliable.

Timely – test cases should be prepared as soon as the related feature is ready.
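The four principles can be sketched with a deliberately tiny example: pure, deterministic unit tests behind a single entry point a developer can trigger with one command. The function and test suite below are hypothetical stand-ins, not a prescribed structure.

```python
# Sketch of the four principles: fast, deterministic tests behind a
# one-call entry point. The feature and tests are illustrative.
import time
import unittest

def price_with_discount(price, rate):
    # Feature under test: small, pure, no external dependencies.
    return round(price * (1 - rate), 2)

class PricingTests(unittest.TestCase):
    # Trustworthy: deterministic assertions, no network or shared state.
    def test_discount_applied(self):
        self.assertEqual(price_with_discount(100.0, 0.2), 80.0)

    def test_zero_discount(self):
        self.assertEqual(price_with_discount(59.99, 0.0), 59.99)

def run_suite():
    # Convenient: one call (wired to a make target, IDE button, or
    # pre-commit hook) runs the whole suite.
    suite = unittest.TestLoader().loadTestsFromTestCase(PricingTests)
    start = time.perf_counter()
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    elapsed = time.perf_counter() - start
    # Fast: milliseconds, not minutes.
    return result.wasSuccessful(), elapsed

ok, elapsed = run_suite()
print(ok)  # True
```

Timeliness is the one principle code cannot show: it is a workflow property, namely that such a suite exists the moment the feature is declared done.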

Changing the user of test cases constitutes a second‑order change; it requires a shift in mindset and workflow.

To meet these principles, many test cases need to move down the testing pyramid, from end-to-end to system-integration, component, and even unit tests, which may change roles and responsibilities, breaking existing workflows but enabling new improvements.
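As an illustration of moving a check down the pyramid, the (hypothetical) shipping-fee rule below is extracted from the web layer so its boundaries can be verified by unit tests, leaving the end-to-end suite to cover only the wiring:

```python
# Illustrative refactor: a business rule extracted from the web layer
# so it can be tested without a browser, server, or test environment.
def shipping_fee(order_total):
    """Rule formerly verifiable only through an end-to-end test:
    orders of 99.0 or more ship free, otherwise a flat fee applies."""
    return 0.0 if order_total >= 99.0 else 9.5

# Unit tests: run in microseconds, nothing to provision, failures
# point directly at the rule rather than at the whole stack.
assert shipping_fee(120.0) == 0.0
assert shipping_fee(99.0) == 0.0   # boundary case
assert shipping_fee(40.0) == 9.5

# The remaining end-to-end suite keeps only a thin happy-path check
# that the rule is wired into the page, not every boundary.
print("unit-level checks passed")
```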

This shift can improve the state of automated testing and restore the test distribution to a healthy, upright pyramid, creating a virtuous cycle.

So far we have discussed the "software delivery" side of continuous delivery (the fast‑verification loop). From a product perspective, this loop is incomplete because it starts with a ready requirement, not the problem origin. The full delivery loop begins with a business question, then anchors the expected impact, co‑creates solutions, refines the chosen approach, and finally enters the fast‑verification loop.

3. Where is DevOps Heading?

Envisioning the future, DevOps promotes a common language that enables tighter collaboration across roles, thereby improving delivery efficiency. Shared goals foster this common language.

For enterprises, the most meaningful target is the business goal. Many IT organizations are structured around functional goals rather than business objectives; to achieve business goals, resources must be organized around those goals. No single structure is perfect, but business‑oriented teams are generally more effective.

"Organization walls" always exist; breaking functional silos creates new "business-unit walls". As long as these walls are easy to break down and rebuild, the organization remains healthy.

A flexible yet powerful elastic organization consists of many small business units that can quickly respond to market changes. Not only external‑customer‑facing teams, but also internal service teams (e.g., infrastructure) should be treated as business units based on the customers they serve.

When many business units exist, aligning them becomes a huge challenge, especially when they must collaborate. The popular "mid‑platform" architecture appears to conflict with small‑unit design, but in reality the mid‑platform itself should be composed of many aligned small units, similar to micro‑service architecture.

One possible way to achieve efficient alignment is to learn from "Amoeba Management" pioneered by Kazuo Inamori at Kyocera and later applied to Japan Airlines. The approach splits the organization into small, autonomous profit‑centers (amoebas) that manage their own P&L, activating employees to own their goals.

To accomplish such transformation, three aspects must be addressed: software architecture, organizational mechanisms, and collaboration infrastructure (as illustrated in the diagram). The book "Continuous Delivery 2.0" details three case studies that explore different implementation paths based on scale, business goals, and product shape.

4. Summary

Optimizing a system without changing its structure is relatively easy, but once optimization reaches a certain level, a second‑order change is often required to achieve a breakthrough and reach the next level of performance.

Qiao Liang's video course, "Continuous Deployment Bootcamp (Python Edition)", is now open for enrollment at a limited-time special price!

Is your software development efficient enough? Is the quality good enough? How long does it take to push a change to production? Are you stressed about delivery quality? Have you ever faced deployment failures or downtime? Continuous deployment can eliminate these pains, letting you focus on high-value customer needs. This course combines theory and practice to teach you how to build test-and-deploy pipelines, monitor deployments, roll out features gradually, and handle database schema changes.

Tags: DevOps, Metrics, Continuous Delivery, Organizational Design, Second-Order Change
Written by Continuous Delivery 2.0, a publication covering organizational management, team management, and engineering efficiency.