
Why Code Reuse May Not Boost Efficiency: A Critical Look at System Design

This article examines the common belief that reuse improves productivity, explains the faulty reasoning behind it, highlights the hidden costs of over-generalization, and proposes autonomy and consistency metrics as practical guides for deciding when reuse truly adds value.

IT Architects Alliance

In system design and coding, the term "reuse" comes up constantly, and it pushes designers toward abstractions of many kinds:

Abstract a class, function, or UI component for different contexts.

Abstract a micro‑service, making it as fine‑grained as possible for flexible composition.

Abstract a business middle platform to consolidate core functions for company‑wide use.

Abstract a functional vertical (backend, frontend, QA, finance) used by various products.

Reuse a previously designed feature when a new one is similar.

Why do we like reuse?

Main purpose: if a function already exists, using it saves effort and increases personal productivity.

Education: engineers are taught the DRY principle, and the "Design Patterns" book emphasizes reusability.

Programming habit: object‑oriented languages encourage inheritance, whose primary goal is reuse.
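The inheritance habit can be illustrated with a minimal sketch (class names are hypothetical, not from the article). Inheritance does deliver reuse, but the subclass also inherits the parent's full interface and must guard every inherited entry point to keep its own invariants:

```python
class Stack:
    """A plain stack whose storage logic we want to reuse."""

    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()


class BoundedStack(Stack):
    """Reuse via inheritance: push/pop come "for free"..."""

    def __init__(self, limit):
        super().__init__()
        self._limit = limit

    def push(self, item):
        # ...but the subclass must re-check every inherited entry
        # point to keep its own invariant, coupling it to the
        # parent's internals.
        if len(self._items) >= self._limit:
            raise OverflowError("stack is full")
        super().push(item)
```

The reuse is real, but so is the coupling: a change to `Stack`'s internals can silently break `BoundedStack`.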

What is the reasoning behind the belief that reuse improves efficiency?

Writing every line of a system from scratch requires a large amount of code.

Reusing existing code reduces the amount of new code we need to write.

The more code we can reuse, the less we have to write.

Writing less code lets us finish the system earlier.

This reasoning rests on two faulty assumptions:

All code (features) takes the same amount of time to write.

Writing code is the most time‑consuming part of building a system.

If a module depends on many other reusable modules, developers must understand each module’s interface; documentation alone is often insufficient, and mismatched interfaces force trade‑offs between integration effort and schedule.
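The interface-mismatch cost can be sketched as follows (both payment interfaces are hypothetical). The "reusable" module almost fits, so reuse still requires an adapter: glue code that must be written, tested, and maintained for the life of the system:

```python
def legacy_charge(amount_cents: int, currency_code: str) -> dict:
    # Stands in for someone else's battle-tested payment module,
    # which models money as integer cents and returns a dict.
    return {"status": "ok", "charged": amount_cents, "currency": currency_code}


def charge(amount: float, currency: str = "USD") -> bool:
    """Our system models money as a float, so "reusing" the legacy
    module means writing and maintaining this adapter forever."""
    result = legacy_charge(int(round(amount * 100)), currency.upper())
    return result["status"] == "ok"
```

Multiply this by every reused dependency, and the "saved" coding time is partly spent on understanding interfaces and writing adapters instead.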

There is also no standard for abstracting a public business module; the team that owns it often ends up operating like an internal outsourcing shop for every consumer, which reduces overall efficiency.

Maintenance, not coding, consumes the majority of time in production systems. More dependent modules increase debugging, deployment, and iteration difficulty.

Should we write our own versions of frameworks, algorithm libraries, etc.? No. We routinely use languages (Java, Go), frameworks, and libraries that are widely accepted standards and sufficiently generic; reuse should be based on consensus and standards, not arbitrary copying.

Is making a system highly generic (e.g., many CRUD micro‑services) the best way to increase reuse? Not necessarily. Over‑generalization can be likened to organ transplantation rather than LEGO blocks, leading to integration friction, reduced speed, and instability. Common pitfalls include:

Hundreds of fields in a single table without clear scenario relevance.

Designing systems around nouns, resulting in business middle platforms that cause inter‑departmental disputes.

Proliferation of tiny micro‑services requiring a complex orchestration layer (gateway, DSL), increasing deployment complexity and creating single points of failure.

Good design should follow the advice from "Hints for Computer System Design": "Use a good idea again instead of generalizing it. A specialized implementation may be much more effective than a general one."
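The quoted advice can be made concrete with a sketch (field names are illustrative, not from the article): a "generic" record that accepts any fields versus a specialized type that enforces the invariants of its one scenario:

```python
from dataclasses import dataclass


def make_generic_record(**fields):
    # Over-generalized: one record type for every scenario.
    # Nothing is validated; every caller must know which of the
    # many optional fields applies to its use case.
    return dict(fields)


@dataclass
class Invoice:
    # Specialized: a small type per scenario, with its invariants
    # checked where they belong.
    invoice_id: str
    amount_cents: int

    def __post_init__(self):
        if self.amount_cents < 0:
            raise ValueError("amount must be non-negative")
```

The generic record "reuses" one schema everywhere, but each specialized type is simpler to understand, validate, and evolve independently.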

What are the standards for good system design? Which dimensions have priority?

Two evaluation criteria are autonomy and consistency:

Autonomy: the degree to which a module is unaffected by others; interface stability can be measured with AutonomyMetrics[1].

Consistency: the extent to which the same module is used uniformly across the system; over-abstraction can be assessed with ConsistencyMetrics[2].
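As a rough intuition for autonomy (this is an illustrative coupling count of my own, not the AutonomyMetrics definition linked at [1]): a module that few others depend on, and that itself depends on little, is freer to change:

```python
# deps maps each module to the set of modules it depends on
# (illustrative data, not a real system).
deps = {
    "orders": {"payments", "inventory"},
    "payments": set(),
    "inventory": set(),
}


def efferent(module):
    """Outgoing coupling: how many modules this one depends on."""
    return len(deps[module])


def afferent(module):
    """Incoming coupling: how many modules depend on this one."""
    return sum(1 for m, d in deps.items() if module in d)
```

Here "orders" depends on two modules and is depended on by none, so it can evolve freely; "payments" is depended on by others, so its interface must stay stable.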

When deciding whether to extract a common module from multiple business components, consider whether the shared logic needs to change together; if not, do not abstract it.
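The "change together" test can be sketched with hypothetical functions: two features that format a user label identically today, but for different reasons, should keep their own copies:

```python
# The invoice label follows tax rules; the greeting follows UX copy.
# They happen to match today, but they do not need to change together,
# so forcing both through one shared format_name() helper would mean
# every tax-driven change risks breaking greetings, and vice versa.

def invoice_label(first, last):
    return f"{first} {last}"


def greeting_label(first, last):
    return f"{first} {last}"
```

The duplication here is cheaper than the coupling a shared helper would create; extract a common module only when the logic must evolve in lockstep.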

When torn between autonomy and consistency, prioritize autonomy.

Summary

In most cases, core system complexity does not decrease; it merely shifts elsewhere. Writing all the code yourself is not faster than dividing the work among people, so division of labor remains essential. Key takeaways:

Avoid arbitrary generalization for the sake of reuse; use ConsistencyMetrics[2] to judge whether abstraction is justified.

Do not reuse modules indiscriminately; prioritize autonomy using AutonomyMetrics[1], and ensure consensus and standards before reusing.

Related links:

[1] https://autonomy.design/Part1/AutonomyMetrics.html

[2] https://autonomy.design/Part1/ConsistencyMetrics.html


Tags: software architecture, system design, consistency, design principles, reuse, autonomy
Written by

IT Architects Alliance

Discussion and exchange on system, internet, large‑scale distributed, high‑availability, and high‑performance architectures, as well as big data, machine learning, AI, and architecture adjustments with internet technologies. Includes real‑world large‑scale architecture case studies. Open to architects who have ideas and enjoy sharing.
