Automated Testing and Monitoring Solution for DSP Advertising Business
The article outlines a comprehensive automated testing framework for a DSP advertising platform, covering income, interface, and log layers, and detailing the use of protobuf, Pytest, Logstash, ElasticSearch, Jenkins, and Allure to achieve efficient, real‑time quality assurance and continuous integration.
After becoming familiar with the existing advertising business, the author identifies testing pain points and proposes targeted automation to improve test efficiency and quality assurance.
Typical efficiency opportunities include frequently regressed main flows, time‑consuming manual test cases, frequent test data construction, and small auxiliary tools for business partners.
The author works on an advertising DSP that handles billions of traffic requests per day, returning ads in response to that demand.
The article focuses on why automation is needed in this ad business and how it is implemented.
The simplified data flow involves three parties: the media side (SSP), the ad exchange (ADX), and the demand‑side platform (DSP).
The DSP processes these interface requests; testing therefore focuses on three layers: income, interface, and logs.
Income layer: Revenue is the ultimate goal; any business logic can affect income, requiring comprehensive coverage and real‑time monitoring of logic, revenue, and key metrics.
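As a minimal sketch of the kind of real-time metric check this layer implies (the function name and the 20% threshold are assumptions, not the author's implementation):

```python
def revenue_deviates(current, baseline, threshold=0.2):
    """Return True when revenue drifts more than `threshold`
    (an assumed 20%) from the historical baseline.
    A production monitor would page on-call rather than return a bool."""
    if baseline == 0:
        return current != 0
    return abs(current - baseline) / baseline > threshold
```

The same shape of check applies to any key metric the business tracks, with the baseline taken from a trailing window rather than a fixed number.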
Interface layer: The DSP and ADX communicate via Google Protocol Buffers (protobuf), a compact, high‑efficiency data format. The proto defines roughly 50 request and response fields, so mobile DSP scenarios (http/https, app/wap, single/multiple images, size combinations) multiply into a combinatorial explosion of test cases.
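The scale of that explosion is easy to see by enumerating the dimensions; the values below are illustrative, taken from the scenarios just listed (the ad sizes are assumed samples):

```python
import itertools

# Illustrative scenario dimensions drawn from the text above
SCHEMES = ["http", "https"]
PAGE_TYPES = ["app", "wap"]
IMAGE_STYLES = ["single", "multiple"]
SIZES = ["640x320", "320x50", "300x250"]  # assumed sample sizes

# Every combination is a distinct test case
ALL_CASES = list(itertools.product(SCHEMES, PAGE_TYPES, IMAGE_STYLES, SIZES))

def case_id(case):
    """Readable id for one combination, e.g. 'https-app-single-640x320'."""
    return "-".join(case)
```

Even these four small dimensions already produce 24 cases; adding a handful of the ~50 proto fields quickly pushes the count beyond what manual testing can cover.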
Log layer: Advertising data analysis and revenue settlement rely on server logs; manual verification is inefficient due to the large number of log fields.
Therefore, only a combination of automation and online monitoring can adequately ensure business quality.
The proposed automation testing solution architecture (illustrated in the original diagram) includes several key technologies and frameworks.
1. Request client module: Handles protobuf request construction, sending, response parsing, and log retrieval via ElasticSearch API.
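A minimal sketch of such a client, with the transport injected so a test stub can stand in for real HTTP; the class and method names are assumptions, and JSON serialization stands in for the compiled protobuf message classes:

```python
import json

class BidRequestClient:
    """Sketch of the request-client module: build, send, parse."""

    def __init__(self, endpoint, transport):
        self.endpoint = endpoint
        # transport: callable(endpoint, payload_bytes) -> response_bytes
        self.transport = transport

    def build_request(self, fields):
        # The real client would fill a protobuf BidRequest and call
        # SerializeToString(); JSON stands in here.
        return json.dumps(fields).encode("utf-8")

    def parse_response(self, raw):
        # The real client would call BidResponse.ParseFromString(raw).
        return json.loads(raw.decode("utf-8"))

    def send(self, fields):
        payload = self.build_request(fields)
        return self.parse_response(self.transport(self.endpoint, payload))
```

Log retrieval against the ElasticSearch API would hang off the same client, keyed by the request id it just sent.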
2. Test case and data organization: Implemented with the Python testing framework Pytest, chosen over unittest and Nose for its concise assertions, fixtures, parametrization, and plugin ecosystem.
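Parametrization in particular is what tames the field combinations. A hedged sketch, where the response field names and the helper are assumptions rather than the project's real code:

```python
import pytest

def has_required_fields(response):
    """Hypothetical check on a parsed bid response; a real verifier
    would walk the protobuf BidResponse fields."""
    return "ad_id" in response and "price" in response

@pytest.mark.parametrize("response,expected", [
    ({"ad_id": "a1", "price": 120}, True),   # normal fill
    ({"ad_id": "a1"}, False),                # missing settlement price
    ({}, False),                             # no bid
])
def test_response_required_fields(response, expected):
    assert has_required_fields(response) is expected
```

Each tuple becomes its own reported test case, so one function covers a whole table of scenarios.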
3. Log collection: Uses Logstash to ship logs to ElasticSearch, enabling real‑time log ingestion and querying.
Logstash configuration consists of three parts:
Input: defines file paths to monitor.
Filter: processes data, extracting required fields.
Output: directs logs to destinations such as files, consoles, message queues, or ElasticSearch.
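A minimal configuration following those three parts might look like this; the file path, grok pattern, and index name are placeholders, not the project's actual config:

```
input {
  file {
    path => "/data/dsp/logs/bid.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    # Extract a timestamp and the remaining payload from each line
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{GREEDYDATA:body}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "dsp-bid-%{+YYYY.MM.dd}"
  }
}
```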
ElasticSearch provides a RESTful API for storing and querying the collected logs.
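Querying those logs from a test then reduces to building a `_search` body and POSTing it to ES; the index name, field names, and helper functions below are assumptions:

```python
def build_log_query(request_id, size=10):
    """Build an ElasticSearch _search body fetching the server log
    documents written for one test request.
    POST http://<es-host>:9200/dsp-bid-*/_search with this as JSON."""
    return {
        "query": {"term": {"request_id": request_id}},
        "size": size,
        "sort": [{"ts": {"order": "desc"}}],
    }

def extract_hits(search_response):
    """Pull log documents out of the standard ES response envelope."""
    return [hit["_source"] for hit in search_response["hits"]["hits"]]
```

The test then asserts on the extracted documents (field presence, values, counts) instead of grepping log files by hand.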
4. Data visualization (Kibana): Although not used for automated verification, Kibana is mentioned as the typical ELK stack visualization tool.
5. Continuous integration and reporting: Jenkins triggers the test pipeline, and Allure generates test reports, enabling daily and nightly builds.
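Sketched as a declarative Jenkinsfile using the Allure Jenkins plugin; the schedule, paths, and stage names are assumptions:

```groovy
pipeline {
  agent any
  triggers { cron('H 2 * * *') }   // assumed nightly schedule
  stages {
    stage('Run tests') {
      steps {
        // allure-pytest writes raw results for the report step below
        sh 'pytest tests/ --alluredir=allure-results'
      }
    }
  }
  post {
    always {
      allure results: [[path: 'allure-results']]
    }
  }
}
```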
The automation enables full‑coverage verification of all parameter combinations and, combined with Jenkins, supports continuous integration to maintain functional stability.
Readers are invited to discuss and apply the presented techniques to their own business contexts.
360 Quality & Efficiency
360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.