How to Start API Testing: Defining Purpose, Selecting Targets, and Designing Test Cases
This article explains how to begin API testing by clarifying its purpose, choosing critical interfaces, determining required system functions, and outlining a practical test design that covers environments, data, functional points, execution, and result verification.
The author recently used the company's internal interface testing platform to write test cases and reflect on the challenges encountered, hoping to provide useful insights for readers.
How to start API testing? First, clearly define its purpose: verifying data exchange, transmission, and dependencies between external systems and the system under test.
Second, select the test objects. Because a system may expose a very large number of interfaces, the author categorizes them into two major groups: write (incoming) interfaces, which simulate external input by varying parameters, and read (outgoing) interfaces, which validate the values and status of data flowing out.
Third, determine the system functions to be validated. By understanding what functionality the system offers to users and what users truly need, testers can filter relevant interfaces and design appropriate test cases.
With the purpose, objects, and functions defined, the article proceeds to the design of API testing.
API testing environments are divided into a test environment and a production environment. All types of API tests can be executed in the test environment, including calls to production APIs with test data. The production environment is mainly used to monitor output values and states to avoid affecting real users.
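The environment split above can be captured in a small configuration lookup. This is a minimal sketch; the environment names and URLs are assumptions for illustration, not values from the article.

```python
# Sketch: selecting an API base URL per environment.
# The URLs below are hypothetical placeholders.
ENVIRONMENTS = {
    "test": "https://api.test.example.com",  # all test types allowed here
    "production": "https://api.example.com",  # read-only monitoring only
}

def base_url(env: str) -> str:
    """Return the base URL for the given environment, or raise on a typo."""
    if env not in ENVIRONMENTS:
        raise ValueError(f"unknown environment: {env}")
    return ENVIRONMENTS[env]
```

Keeping the mapping in one place makes it harder to accidentally point a destructive test at production.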
Test data consists of interface parameters and internal system data required for case execution. Parameter design must follow business logic and consider boundary and exception cases, allowing free combination to derive expected results.
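The "free combination" of parameter values described above can be generated mechanically. A minimal sketch, assuming hypothetical field names (`amount`, `currency`) and value pools that mix normal, boundary, and exception values:

```python
from itertools import product

# Per-field value pools mixing valid, boundary, and invalid values.
# Field names and values are hypothetical examples.
param_pools = {
    "amount": [0, 1, 9999, -1],      # boundaries plus an invalid negative
    "currency": ["CNY", "USD", ""],  # valid codes plus an empty string
}

def combinations(pools):
    """Yield every parameter combination as a dict."""
    keys = list(pools)
    for values in product(*(pools[k] for k in keys)):
        yield dict(zip(keys, values))

cases = list(combinations(param_pools))
# 4 amounts x 3 currencies = 12 candidate cases
```

Each generated combination still needs an expected result assigned according to the business logic, which is where tester judgment comes in.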
Functional points to test include verifying return status, return values, and exception responses for single interfaces, as well as chaining multiple interfaces so that the output of one becomes the input of the next.
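Chaining can be sketched as follows, with local stand-in functions (`create_order`, `query_order` are hypothetical, not a real API) replacing actual network calls so the example is self-contained:

```python
# Sketch: chaining two interfaces so one call's output feeds the next.
# Both functions are local stubs standing in for real API calls.
def create_order(user_id: int) -> dict:
    return {"status": 200, "order_id": f"ORD-{user_id}-001"}

def query_order(order_id: str) -> dict:
    return {"status": 200, "order_id": order_id, "state": "CREATED"}

create_resp = create_order(42)
assert create_resp["status"] == 200               # single-interface check

query_resp = query_order(create_resp["order_id"])  # chained input
assert query_resp["state"] == "CREATED"            # downstream check
```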
Executing test cases is straightforward: invoke the API.
Result verification involves checking whether the returned values match expectations and ensuring each case is executed without omissions or duplication.
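Execution and verification together amount to calling the interface and comparing the actual response to the expected one. A minimal sketch, with the API call replaced by a local stub so it runs standalone:

```python
# Sketch: execute one case and verify the result against an expectation.
# call_api is a hypothetical stub standing in for a real API invocation.
def call_api(params: dict) -> dict:
    return {"code": 0, "data": {"balance": params["amount"]}}

expected = {"code": 0, "data": {"balance": 100}}
actual = call_api({"amount": 100})
assert actual == expected, f"mismatch: {actual} != {expected}"
```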
In summary:
- Environment: test vs. production.
- Test data: parameters and internal data, designed with business logic and edge cases.
- Functional points: status, return values, exception handling, and chained interface logic.
- Execution: simply call the API.
- Result verification: compare actual results with expected outcomes and avoid missed or duplicate tests.
All API test cases revolve around three steps: parameter preparation, execution, and result validation, each tailored to the specific business context.
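The three-step structure can be expressed as an ordinary test-case skeleton. A sketch using Python's `unittest`; the system under test (`transfer`) is a hypothetical stand-in:

```python
import unittest

# Hypothetical system-under-test: a transfer interface stub.
def transfer(src: str, dst: str, amount: int) -> dict:
    if amount <= 0:
        return {"code": 400, "msg": "invalid amount"}
    return {"code": 0, "msg": "ok"}

class TransferApiTest(unittest.TestCase):
    def test_valid_transfer(self):
        params = {"src": "A", "dst": "B", "amount": 10}  # 1. prepare parameters
        resp = transfer(**params)                         # 2. execute
        self.assertEqual(resp["code"], 0)                 # 3. validate result

    def test_zero_amount_rejected(self):
        resp = transfer("A", "B", 0)                      # boundary case
        self.assertEqual(resp["code"], 400)
```

Each case keeps the same three-phase shape, with only the parameters and expectations tailored to the business context.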
360 Quality & Efficiency
360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.