Managing Complexity in Software Architecture
The article argues that complexity in software systems is unavoidable and must be consciously placed, managed, and reduced through thoughtful architectural decisions, prioritizing high‑frequency changes, eliminating duplication, and accepting that perfect local consequences are impossible.
Recently I read a highly upvoted article (originally shared on Lobsters) arguing that complexity is immortal: it can only be shifted between parts of a system, and library authors must accept that making a library easier for its users inevitably makes the library itself more complex.
As the article puts it, “Complexity has to live somewhere. If you are lucky, it lives in well‑defined places,” meaning we must acknowledge complexity’s existence and manage it in places everyone understands.
Complexity commonly leaks in two ways: through runtime costs that cannot be hidden, and through failures that force us to “open the hood” of the system, exposing low‑level details such as kernel bugs or hardware driver issues.
Consequently, when a component breaks, users must either fix it themselves (and blame the author) or wait for the author to intervene (and blame the author all the same), reinforcing the notion that complexity merely moves around.
Another form of complexity is perceived complexity: code that feels like spaghetti because it exceeds the limits of human working memory. Kent Beck’s “Local Consequence” principle captures the corresponding ideal: a change should have a limited, localized impact.
Achieving true local consequence for every change is impossible; we must instead rely on experience to identify high‑frequency modifications and make those as easy as possible, even though predicting future changes is difficult.
Despite these limits, layering and partial complexity transfer remain valuable: few developers ever need to understand the internals of a TCP/IP stack or Linux kernel timers, yet pushing complexity down into those layers saves everyone time in the common case. Recognizing where complexity can be moved, even slightly, is worthwhile.
Eliminating duplication is a powerful way to reduce complexity. Successful examples include: forward/backward passes in neural networks (PyTorch), Vue’s dependency‑tracking re‑execution, React’s single render function with virtual‑DOM diffing, Airtable’s form CRUD generation, and gRPC’s code generation for RPC structs.
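The Vue example above can be made concrete with a small sketch. This is not Vue’s actual implementation, only an illustrative Python reconstruction of the idea: an “effect” records which reactive fields it reads, and a write to any of those fields automatically re-runs the effect, eliminating the duplicated “update state, then remember to update the view” pattern. The `Reactive` and `watch_effect` names are my own, not any framework’s API.

```python
_active_effect = None   # the effect currently being (re-)executed, if any
_subscribers = {}       # field name -> set of effects that read that field


class Reactive:
    """Object whose attribute reads are tracked and whose writes re-run effects."""

    def __init__(self, **fields):
        object.__setattr__(self, "_data", dict(fields))

    def __getattr__(self, name):
        # Record that the running effect depends on this field.
        if _active_effect is not None:
            _subscribers.setdefault(name, set()).add(_active_effect)
        return self._data[name]

    def __setattr__(self, name, value):
        self._data[name] = value
        # Re-run every effect that previously read this field.
        for effect in list(_subscribers.get(name, ())):
            effect()


def watch_effect(fn):
    """Run fn once while tracking its reads, so future writes re-run it."""
    def effect():
        global _active_effect
        _active_effect = effect
        try:
            fn()
        finally:
            _active_effect = None

    effect()
    return effect


state = Reactive(count=0)
log = []
watch_effect(lambda: log.append(f"count is {state.count}"))
state.count = 1   # the write alone re-runs the effect; no manual update call
print(log)        # ['count is 0', 'count is 1']
```

The duplication that disappears here is the pairing of every state mutation with a hand-written view update; the framework derives the second from the first.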
If no duplication is found, the effort may only shift complexity; however, discovering inherent duplication can inspire new frameworks or patterns that eradicate that duplication, yielding overall complexity reduction.
Trying multiple approaches is essential; sometimes a full paradigm shift (e.g., from jQuery to Vue) is required to truly eliminate duplication and achieve more local consequences.
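The jQuery-to-declarative shift can be sketched as well. Instead of hand-writing an imperative DOM mutation for every event (the jQuery style), you write one render function for the whole UI and let a diff compute the minimal mutations. The node shape and diff rules below are illustrative assumptions in Python, not React’s actual reconciliation algorithm.

```python
def render(state):
    """One declarative description of the entire UI for any given state."""
    return {
        "tag": "ul",
        "children": [{"tag": "li", "text": item} for item in state["items"]],
    }


def diff(old, new, path="root"):
    """Compute a flat list of patch operations that turn `old` into `new`."""
    patches = []
    if old is None:
        patches.append(("create", path, new))
    elif new is None:
        patches.append(("remove", path))
    elif old.get("tag") != new.get("tag"):
        patches.append(("replace", path, new))
    else:
        if old.get("text") != new.get("text"):
            patches.append(("set_text", path, new.get("text")))
        old_kids = old.get("children", [])
        new_kids = new.get("children", [])
        for i in range(max(len(old_kids), len(new_kids))):
            patches.extend(diff(
                old_kids[i] if i < len(old_kids) else None,
                new_kids[i] if i < len(new_kids) else None,
                f"{path}/{i}",
            ))
    return patches


old = render({"items": ["a", "b"]})
new = render({"items": ["a", "c", "d"]})
print(diff(old, new))
# [('set_text', 'root/1', 'c'), ('create', 'root/2', {'tag': 'li', 'text': 'd'})]
```

The paradigm shift is that per-event mutation code, duplicated across every handler in the jQuery style, collapses into one render function plus one generic diff.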
Ultimately, software architecture may have diminishing returns, but there remains ample unexplored space for better designs, especially when we focus on reducing duplication and managing inevitable complexity.