
Local Optimization vs Global Quality: Metrics, Bugs, and Team Capability

This article examines how fragmented metrics and local optimizations can harm overall software quality, discussing the bug‑vs‑feature debate, various measurement approaches, and the importance of viewing quality as a collective team capability rather than an individual or departmental responsibility.


This article explores where quality capability actually resides in a team, and what measurement scopes and strategies follow from that view.

Local optimization harms global interests

Test: "This is a bug; it clearly doesn't match the expected result." Developer: "The requirement never specified it, so it's a new feature." Test: "It contradicts the product's usage logic, please fix it." Developer: "Then close the bug as invalid; without a requirement change, I can't touch it."

Measurement method: developers count introduced defects to gauge code quality, while testers count discovered defects to gauge testing efficiency. Poorly designed metrics like these put the two roles in a direct conflict of interest.

Test: "This bug appears on both the product page and the order page; I'll file one bug." Developer: "No, they are served by different services; file two bugs." Test: "Okay." Project manager: "Why split them? They look the same." Developer: "They look the same, but they need separate fixes." Project manager: "Got it."

Another measurement: developers count defect fixes to assess workload, testers count discovered defects to assess testing efficiency. Even with cooperation, it is unclear whether this improves quality or merely inflates defect numbers.
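The conflict in the two stories above can be reduced to a counting question: the same facts yield different numbers depending on whose metric is being fed. A minimal sketch (the bug-record structure and field names below are invented for illustration):

```python
# Hypothetical sketch: one set of facts, two metrics, two different counts.
bug = {
    "symptom": "discount not applied",
    "pages": ["product", "order"],             # where users see the problem
    "services": ["product-svc", "order-svc"],  # where it must be fixed
}

# A tester judged on discovered defects sees one symptom: one bug report.
tester_count = 1

# A developer judged on fix workload tracks one defect per service patched.
developer_count = len(bug["services"])

print(tester_count, developer_count)  # 1 2
```

Neither count is wrong; the disagreement is manufactured by measuring the two roles against different, locally optimized denominators.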


Junior: "Your thousand-line defect rate is so low, teach me!" Senior: "How is the thousand-line defect rate calculated?" Junior: "Defect count / (lines of code / 1000)." Senior: "It's basic math, C = A/B; to make C smaller, what can you do?" Junior: "...Make B bigger: write more lines of code. I get it, thanks!" Senior: "And that's how, for the sake of a paycheck, principles get abandoned."

Measurement: developers are assessed on thousand-line defect rate, but such poorly designed metrics are like drinking poison to quench thirst: the number improves in the short term while the underlying quality problem gets worse.

Before discussing why metrics fail, we must ask whose capability quality actually measures and what the measurement serves.

Quality is a team-wide capability

Two perspectives: (1) quality is the testing department's responsibility, so metrics focus on finding more defects, improving testing efficiency, and controlling escaped defects, while ignoring non-testing signals such as requirement change rate or post-deployment rollbacks; (2) quality is a shared team responsibility, so metrics aim to reduce introduced defects, improve overall team effectiveness, and include any indicator that affects team-wide quality.

While the first seems more focused and yields quick improvements, it does not guarantee higher quality because swapping in a stronger testing team does not eliminate product defects; problems exist throughout the development lifecycle.

Therefore, consensus is that quality is a team capability. The earlier stories’ metric failures arise from siloed thinking—designing metrics only for developers without considering their effect on testing or the whole team—resulting in short‑term local optimization that harms overall quality.

When measuring defects from a fragmented view, teams compare defect submission counts across groups, creating division. A global view acknowledges differing technical and business complexities, making cross‑team comparison meaningless; instead, unified, positively incentivized metrics should be adopted to boost overall team quality.

The article sketches three stages of management mindset, each placing more emphasis on respect and trust for individuals, and shifting focus from merely counting bugs to the real goals: smooth releases, good user experience, and genuine value rather than inflated defect numbers.

If we stop measuring defect counts, how do we demonstrate testers' contribution? By using positive indicators such as time saved for users, economic loss avoided, or cost savings for customers: metrics that can still be quantified and are more persuasive to management.
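One way to quantify such a positive indicator is to translate pre-release defect discoveries into the production cost they avoided. A minimal sketch, where the severity categories and cost figures are invented assumptions for illustration:

```python
# Hypothetical "economic loss avoided" indicator: instead of counting defects,
# estimate what each pre-release catch would have cost in production.
# All figures below are invented assumptions, not real benchmarks.
AVOIDED_COST_BY_SEVERITY = {
    "critical": 50_000,  # e.g. outage or data loss avoided
    "major": 5_000,      # e.g. broken checkout flow avoided
    "minor": 200,        # e.g. cosmetic issue avoided
}

def avoided_loss(defects_found_pre_release: dict) -> int:
    """Rough economic loss avoided by catching defects before release."""
    return sum(AVOIDED_COST_BY_SEVERITY[severity] * count
               for severity, count in defects_found_pre_release.items())

print(avoided_loss({"critical": 1, "major": 4, "minor": 30}))  # 76000
```

Unlike a raw defect count, this framing rewards catching the defects that matter most, and it gives everyone on the team the same incentive: reduce expensive escapes.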

Traditionally, metrics emphasized quantitative analysis; we should gradually shift toward qualitative analysis that highlights team capability and personal growth, embedding ability and culture into the organization.


Tags: R&D management, metrics, software quality, team capability, bug vs feature, DevOps
Written by DevOps

Shares premium content and events on trends, applications, and practices in development efficiency, AI, and related technologies. IDCF (International DevOps Coach Federation) trains end-to-end development-efficiency talent, connecting high-performance organizations and individuals in pursuit of excellence.