System Integration Testing (SIT) Strategies for Large‑Scale Agile Projects
This article explores how large‑scale agile projects can effectively conduct System Integration Testing (SIT) through layered testing strategies, organized team structures, phased integration rhythms, test planning, case design, branch management, interface change handling, and evolving QA responsibilities.
For large‑scale products, even under agile methods, integrating multiple services—and integrating with other products—is unavoidable. This article discusses how to perform System Integration Testing (SIT) in a large‑scale agile context.
Layered testing strategy: as distributed architectures have become widespread, testing is organized in layers: from testing each service individually, to integrating multiple services within a module, to basic cross‑module testing within a product, and finally to end‑to‑end integration across multiple products.
1) In‑iteration testing focuses on two aspects: service functional testing and intra‑module service integration.
2) SIT integration testing is divided into two phases. The first, product‑internal SIT (self‑test), concentrates on integration between services within the same product, using mock servers to isolate third‑party dependencies. The second, product‑to‑product SIT (cross‑product "link‑up" testing), focuses on interface integration between different products and often involves multiple teams and organizations.
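To illustrate the isolation idea in product‑internal SIT, here is a minimal mock‑server sketch using only Python's standard library. The `/partner/quote` endpoint and its payload are hypothetical examples, not details from the article:

```python
# Minimal mock of a third-party dependency for product-internal SIT.
# The endpoint path and payload below are hypothetical examples.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned responses standing in for the real third-party product.
CANNED = {
    "/partner/quote": {"price": 100, "currency": "USD"},
}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED.get(self.path)
        if body is None:
            self.send_response(404)
            self.end_headers()
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test runs quiet

def start_mock(port=0):
    """Start the mock on a background thread; return (server, bound port)."""
    server = HTTPServer(("127.0.0.1", port), MockHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

A real SIT mock server would typically add configurable latency and fault injection, but the principle is the same: the product under test is pointed at the mock's port instead of the real third‑party system.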
Organization methods:
Virtual SIT team: each Scrum team appoints a SIT liaison who bridges the overall SIT effort and the team; a SIT Lead coordinates overall organization, strategy, and standards; the Scrum teams execute the tests.
Independent SIT team: a dedicated team responsible solely for system integration testing, loosely connected to Scrum teams; a SIT Lead manages organization, coordination, and knowledge transfer.
Advantages of the virtual team include flexibility, higher priority for SIT tasks, and transparent knowledge sharing. Drawbacks are resource conflicts with iteration work and frequent role switching. The independent team offers strong execution power, dedicated resources, and isolation from iteration impact, but incurs higher collaboration cost, potential silo effects, and requires highly skilled members.
Choosing between the two depends on team bandwidth and product maturity: teams with strong collaboration can adopt the virtual model, while early‑stage products with high integration risk may prefer an independent SIT team.
Regardless of the model, a unified set of principles, test strategy, issue‑response mechanisms, and test‑management standards is essential for coordinated integration testing.
The SIT integration rhythm is shaped by iteration cadence and product complexity. A four‑phase approach is recommended:
Phase 1 – MVP integration: first integration after completing the highest‑priority (P0) MVP.
Phase 2 – Large‑requirement integration: after completing the next priority (P1) large requirements.
Phase 3 – Iteration‑based integration: smaller subsequent requirements are integrated each iteration.
Phase 4 – Demand‑driven integration: new small demands discovered during testing are integrated as needed.
Test management proceeds in three phases: planning & preparation, execution & monitoring, and closure & summary, each with its own activities.
Test case design differs between SIT self‑test and SIT cross‑product testing. Self‑test cases are split into user‑viewpoint scenario cases, reviewed jointly by QA and the PO, and QA‑driven boundary/exception cases. Cross‑product cases focus on end‑to‑end business flows, avoiding combinatorial explosion by targeting the core scenarios that span multiple products.
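The cross‑product case‑selection idea can be sketched as a curated scenario list plus a small runner: instead of covering the full product/operation matrix, only core flows that span multiple products are enumerated. All scenario, product, and action names below are hypothetical:

```python
# Hypothetical core cross-product scenarios: each is an ordered list of
# (product, action) steps forming one end-to-end business flow.
CORE_SCENARIOS = {
    "place_order": [("ordering", "create"), ("inventory", "reserve"),
                    ("billing", "charge"), ("shipping", "dispatch")],
    "cancel_order": [("ordering", "cancel"), ("billing", "refund"),
                     ("inventory", "release")],
}

def run_scenario(name, call):
    """Run one core scenario end to end.

    `call(product, action)` invokes the product under test (a real service
    or a mock) and returns True on success. Returns (ok, product, action),
    where product/action identify the first failing step, if any.
    """
    for product, action in CORE_SCENARIOS[name]:
        if not call(product, action):
            return (False, product, action)
    return (True, None, None)
```

Keeping the scenarios as data makes it easy for QA and the PO to review exactly which cross‑product flows are covered, without reading test code.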
Branch strategy and issue fixing: a trunk‑based development model is recommended, supplemented by two additional branches (one for SIT cross‑product integration, one for SIT self‑test) while the main trunk continues iteration development. Issues are fixed on the branch where they are discovered and then cherry‑picked to the other branches.
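The fix‑propagation flow can be sketched as follows. The branch names are hypothetical, and in practice each command list would be executed with `subprocess.run`; here the sketch only builds the commands so the flow is easy to inspect:

```python
# Hypothetical branch names for trunk-based development plus two SIT branches.
BRANCHES = ["main", "sit-self-test", "sit-cross-product"]

def propagate_fix(fix_sha, found_on):
    """Build the git commands that spread a fix to all branches.

    The fix commit stays on the branch where the issue was found;
    for every other branch we switch to it and cherry-pick the commit.
    """
    cmds = []
    for branch in BRANCHES:
        if branch == found_on:
            continue  # the fix already lives here
        cmds.append(["git", "switch", branch])
        cmds.append(["git", "cherry-pick", fix_sha])
    return cmds
```

Building the commands separately from running them also makes the propagation order reviewable before anything touches the repository.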
Interface change handling: document all interfaces and keep written (e.g., email) records of changes. Distinguish blocking scenarios (fix immediately, whether the cause is a defect or a deliberate interface change) from normal scenarios (record the issue, prioritize it, and schedule the fix).
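The blocking‑versus‑normal triage rule can be sketched as a small helper; the issue fields below are hypothetical, chosen only to make the rule concrete:

```python
from dataclasses import dataclass

@dataclass
class InterfaceIssue:
    interface: str   # name of the affected interface
    blocking: bool   # does this block the integration flow right now?
    priority: int    # lower number = more urgent

def triage(issues):
    """Split issues per the article's rule: blocking issues are fixed
    immediately (defect or deliberate change alike); the rest are recorded
    and scheduled in priority order."""
    fix_now = [i for i in issues if i.blocking]
    backlog = sorted((i for i in issues if not i.blocking),
                     key=lambda i: i.priority)
    return fix_now, backlog
```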
QA role transformation in integration testing:
Test Coach: empower the PO with business‑level explanations, configuration guidance, and documentation on mock usage.
Problem‑solving Agent: analyze, clarify, and route issues to developers or business analysts, driving rapid resolution.
Value Guardian: protect product quality and business value, balance technical cost and user impact, and continuously adjust test strategy based on feedback.
These evolving responsibilities enable QA to act as a bridge between development, product, and business, ensuring smooth and efficient integration testing.
Author: Zhang Haiyun Source: Thoughtworks Insights