
Implementation Plan and Results of API Automation Testing

This article outlines the background, step‑by‑step implementation plan, execution mechanisms, sustainable maintenance practices, and measurable outcomes of introducing API automation testing to ensure high‑quality iterative development and stable online services.

转转QA

1. Background

As the business iterates rapidly, we must ensure the quality of daily iteration requirements while also guaranteeing that existing online logic remains correct, continuously maintaining overall system accuracy. Pre‑release regression of historical functionality and regular online system checks are therefore critical, which is where API automation testing proves its value in practice. Accumulated automated test cases enable effective regression of core business processes and improve testing efficiency.

2. Implementation Plan

Based on the current business situation, we define focus points for different stages. The overall process is as follows:

First, select an appropriate API automation tool for the business scenario and run the process. Then focus on covering business logic and writing API test cases, continuously accumulating them. Once a set of cases is accumulated, address automated execution, including trigger timing, stability, and failure tracking. Finally, establish a sustainable maintenance mechanism to ensure the solution can be embedded in the business and continuously guarantee service quality.

1. API Automation Tool Selection

Currently, API automation mainly includes two scenarios:

(1) Simple scenario: single‑interface testing.

(2) Complex scenario: tests that require pre/post data construction, dirty‑data cleanup, and complex assertions.

Based on these, we compared two commonly used tools and decided on the final solution.

✅ Final solution: use the internal API testing platform combined with code‑engine assistance.

(1) Simple scenario: directly assemble cases using the API testing platform.

(2) Complex scenario: for complex assertions or pre/post conditions, encapsulate common test/assertion interfaces in a code project, then configure those interfaces in the testing platform.
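For the complex scenario, an "assertion interface" encapsulated in code might look like the following sketch. The service, its methods, and the case name are all hypothetical stand-ins (a fake in-memory service keeps the example runnable offline); the real version would call the business service under test and be registered on the testing platform.

```python
# Sketch of an encapsulated test/assertion interface for the complex scenario.
# FakeOrderService and all names here are hypothetical; the real implementation
# would invoke the actual service and be configured in the testing platform.
from dataclasses import dataclass, field


@dataclass
class FakeOrderService:
    """Offline stand-in for the real business service."""
    orders: dict = field(default_factory=dict)

    def create(self, order_id: str, amount: int) -> None:
        self.orders[order_id] = {"amount": amount, "status": "CREATED"}

    def pay(self, order_id: str) -> dict:
        order = self.orders[order_id]
        order["status"] = "PAID"
        return order

    def delete(self, order_id: str) -> None:
        self.orders.pop(order_id, None)


def pay_order_case(service: FakeOrderService) -> str:
    """One encapsulated case: precondition -> call -> assertions -> cleanup."""
    order_id = "auto-test-0001"
    service.create(order_id, amount=100)      # precondition: construct test data
    try:
        result = service.pay(order_id)        # interface under test
        assert result["status"] == "PAID"     # complex assertions live in code,
        assert result["amount"] == 100        # not in the platform UI
        return "PASS"
    finally:
        service.delete(order_id)              # remove dirty data regardless
```

The `finally` block is the key design choice: dirty data is cleaned up whether the assertions pass or fail, so repeated runs do not pollute the environment.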

2. API Test Case Accumulation

This stage focuses on two aspects: efficiently creating high‑quality API cases (including supplementing historical core cases and maintaining new demand cases) and assembling case collections according to business clusters and environments.

2.1 Create Cases

(1) Historical case supplementation: Identify core interfaces based on business logic, call volume, and coverage, and gradually add cases according to importance.

(2) New demand case maintenance: Mark which cases can be automated and add new automated cases.
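The prioritization in (1) could be sketched as a simple scoring function. The weights and fields below are purely illustrative assumptions, not the team's actual formula:

```python
# Hypothetical scoring sketch for deciding which historical interfaces to
# cover first; weights and field names are illustrative assumptions.
def rank_interfaces(interfaces, call_weight=0.6, importance_weight=0.4):
    """Sort interfaces so the highest-priority ones get cases first."""
    def score(iface):
        return (call_weight * iface["daily_calls_normalized"]
                + importance_weight * iface["business_importance"])
    return sorted(interfaces, key=score, reverse=True)


interfaces = [
    {"path": "/order/create", "daily_calls_normalized": 0.9, "business_importance": 1.0},
    {"path": "/user/avatar",  "daily_calls_normalized": 0.2, "business_importance": 0.1},
]
ranked = rank_interfaces(interfaces)  # /order/create ranks first
```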

2.2 Assemble Case Scenarios (Create Case Sets)

We defined guidelines for creating case sets, including clustering and environment segregation.

(1) Cluster‑based splitting: group interfaces of the same cluster into one case set to trigger regression for core functions after deployment.

(2) Environment‑based splitting: separate case sets for online, sandbox, and test environments to meet different configuration and user data requirements.
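The two splitting rules above can be sketched as a grouping function keyed on `(cluster, env)`. The case registry and its cluster/environment names are hypothetical:

```python
# Hypothetical case registry; cluster and environment names are assumptions.
from collections import defaultdict

CASES = [
    {"name": "create_order",  "cluster": "order",   "env": "test"},
    {"name": "pay_order",     "cluster": "order",   "env": "sandbox"},
    {"name": "query_balance", "cluster": "account", "env": "online"},
    {"name": "refund_order",  "cluster": "order",   "env": "online"},
]


def build_case_sets(cases):
    """Split cases into sets keyed by (cluster, env), per the two rules above."""
    sets = defaultdict(list)
    for case in cases:
        sets[(case["cluster"], case["env"])].append(case["name"])
    return dict(sets)
```

Each resulting set maps cleanly onto one trigger: a cluster's test/sandbox sets run after deployment, while its online sets run on the inspection schedule.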

3. Automated Execution of API Cases

With accumulated cases and configured case sets, we need to ensure execution meets automatic regression and service inspection goals, triggering at appropriate times and maintaining reliability.

3.1 Trigger Timing

(1) Automatic execution after successful deployment in test/sandbox environments to regress core functions after each branch deployment.

(2) Online environment periodic inspection: configure a task to run every 5 minutes, monitoring online services to ensure stability.
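The two trigger paths can be sketched as a deployment callback plus a periodic loop. `run_case_set`, the cluster names, and the injectable `sleep` are assumptions standing in for the platform's real execution API and scheduler:

```python
# Sketch of the two trigger paths; run_case_set and the names passed to it are
# hypothetical hooks into the testing platform's execution API.
import time


def on_deploy_success(cluster: str, env: str, run_case_set) -> dict:
    """Deployment callback: regress the deployed cluster's case set in that env."""
    return {"trigger": "deploy", "result": run_case_set(cluster, env)}


def periodic_inspection(run_case_set, interval_sec=300, runs=3, sleep=time.sleep):
    """Online inspection loop: execute the online case set every 5 minutes."""
    results = []
    for _ in range(runs):
        results.append(run_case_set("core", "online"))
        sleep(interval_sec)  # injectable so the sketch is testable without waiting
    return results
```

In production the 5-minute cadence would come from the platform's scheduler (or cron) rather than an in-process loop; the loop here just makes the cadence explicit.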

3.2 Improving Execution Stability

Stability improvement occurs in two stages:

Stage 1 – Enhance case stability: Failed cases are reported to the responsible QA cluster, which addresses case brittleness, platform issues, or environment problems, iteratively improving stability. Once stable, results are shared with developers for joint maintenance.

Stage 2 – Enhance service stability: After case stability, execution results are communicated to both developers and QA, who jointly monitor outcomes, quickly locate and resolve issues, and ensure service reliability.
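The two-stage routing of failures might be sketched as follows. The cause categories and recipient names are assumptions for illustration, not the team's actual configuration:

```python
# Illustrative failure-triage sketch for the two stability stages; cause
# categories and recipient names are hypothetical.
CAUSES = ("case_brittleness", "platform_issue", "environment_issue", "service_defect")


def triage_failure(case_name: str, cause: str, stage: int) -> dict:
    """Stage 1 routes failures to the QA cluster owner; stage 2 adds developers
    so both sides jointly locate and resolve issues."""
    if cause not in CAUSES:
        raise ValueError(f"unknown cause: {cause}")
    notify = ["qa_cluster_owner"]
    if stage >= 2:
        notify.append("dev_owner")
    return {"case": case_name, "cause": cause, "notify": notify}
```

Recording the `cause` on every failure is what later enables the root-cause analysis and retrospectives described in the maintenance mechanism.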

4. Sustainable Maintenance Mechanism

4.1 Case Set Supplementation Mechanism

On a weekly basis, new business requirements are reported to QA and development leads, who assess whether any interface automation cases need to be added or were missed.

4.2 Issue Tracking Mechanism

Testing and development jointly maintain failed cases, track and resolve them promptly, and continuously optimize the process.

(1) Assign cluster owners to locate and resolve failed cases, recording issues for root‑cause analysis.

(2) QA regularly conducts retrospectives on API automation usage, reviewing recent data, usage patterns, and encountered problems.

3. Results

Through the implementation of API automation testing, we achieved the following outcomes:

(1) Core interface coverage for key clusters reached 100%; daily iteration core interface changes no longer require manual regression and can rely entirely on automation.

(2) Online service quality is ensured, with issues detected and resolved promptly.

(3) The cost of converting new demand cases to automation is low, effectively controlling R&D resource investment.

Over the past six months, the execution success rate has remained around 99.6%, and approximately 10 online bugs were discovered; 73% of these were caused by unstable external services, which prompted us to establish communication mechanisms with the external teams to drive regular resolution.

API automation significantly improves regression efficiency, saves testing manpower, and ensures online service stability.

Tags: automation, quality assurance, Continuous Integration, regression, API Testing