Design and Implementation of an Automated Anti‑Cheat Testing Platform at 58.com
This article describes the background, requirements, architecture, and key components of the anti‑cheat automated testing platform built at 58.com, and explains how rule management, test case generation, the execution engine, and reporting together improve testing efficiency, reduce manual effort, and keep fraud detection in the advertising system reliable.
Background – In the mobile era, fraudulent traffic harms advertisers and consumes resources of commercial ad systems; strict anti‑cheat measures can also affect revenue, prompting the development of various anti‑cheat techniques.
Business Features – Anti‑cheat filters fake traffic by identifying cheat characteristics through algorithms and user‑behavior analysis, then matching extracted features against known cheat signatures.
Requirement Characteristics – Heavy dependence on prerequisite data (pre‑data), purely API‑level testing, frequent urgent iterations, and rule‑centric changes (additions, modifications, deletions).
Requirement Classification – Includes rule‑configuration addition, rule‑logic addition, rule‑logic modification, interface protocol changes, and platform capability changes.
Test Plan – Five test categories are defined: configuration‑only rule changes require only configuration validation, while rule changes that alter logic require interface testing plus regression of the existing case suite. Diagrams in the original article illustrate this classification.
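The mapping from requirement category to test strategy described above can be sketched as a lookup table. This is an illustrative sketch only: the category keys and strategy labels are invented here, and the strategies for protocol and platform‑capability changes are an assumption (the source only specifies the config‑only and logic‑changing cases).

```python
# Hypothetical mapping of requirement categories to test activities.
# Names are illustrative, not the platform's actual identifiers.
TEST_PLAN = {
    "rule_config_addition":       ["config_validation"],
    "rule_logic_addition":        ["interface_testing", "regression"],
    "rule_logic_modification":    ["interface_testing", "regression"],
    "protocol_change":            ["interface_testing", "regression"],  # assumed
    "platform_capability_change": ["interface_testing", "regression"],  # assumed
}

def plan_for(category: str) -> list:
    """Return the test activities required for a requirement category.

    Unknown categories fall back to the full strategy, on the assumption
    that over-testing is safer than under-testing.
    """
    return TEST_PLAN.get(category, ["interface_testing", "regression"])
```

Keeping this mapping in data rather than branching logic makes it easy to review and extend when a new requirement category appears.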
Anti‑Cheat Testing Platform Overview
1. Platform Goals
Business: automate request saving, pre‑data handling, and cleanup; generate test cases from rule configurations; provide QA reports; enable one‑click regression and reporting; record operation data for post‑mortem analysis.
Developer: use extensible design patterns, provide developer documentation.
Platform: support visual configuration of test case parameters and result visualization.
2. Architecture
Key modules:
User Management – Handles user status, permissions (QA, RD, admin), and access control.
Rule Management – Supports adding, querying, and soft (logical) deletion of rules; distinguishes supported rules, rules pending test cases, and unsupported rules.
Rule Configuration Validation – Compares test‑version and production‑version rule files, highlights changes, and updates rule status accordingly.
Test Case Management – Provides CRUD for cases, automatic case generation from rule configurations (positive and negative cases), and manual entry for unsupported rules.
Full‑Case Regression & Execution APIs – Offer service‑level, billing‑type, and rule‑batch regression, as well as single‑case execution.
Execution Engine – Executes cases through pre‑processing, API call, verification, and data cleanup, with a rollback mechanism to isolate failures.
Test Reporting – Generates overall and detailed reports; overall pass rate must reach 100% before release.
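The automatic case generation described under Test Case Management can be sketched as below: for each rule configuration, emit one positive case (traffic that violates the rule and should be filtered) and one negative case (traffic within limits that should pass). The rule schema (`name`, `field`, `threshold`) and the simple threshold semantics are assumptions for illustration, not the platform's actual rule format.

```python
# Illustrative sketch: generate a positive and a negative case from a
# hypothetical threshold-style rule configuration.

def generate_cases(rule):
    """Produce one positive and one negative test case for a threshold rule.

    positive: exceeds the threshold, expected to be flagged as cheat traffic
    negative: stays under the threshold, expected to pass through
    """
    field, threshold = rule["field"], rule["threshold"]
    positive = {
        "rule": rule["name"],
        "request": {field: threshold + 1},  # violates the rule
        "expected": "filtered",
    }
    negative = {
        "rule": rule["name"],
        "request": {field: threshold - 1},  # within the limit
        "expected": "passed",
    }
    return [positive, negative]

# Example: a hypothetical click-frequency rule
cases = generate_cases(
    {"name": "click_freq_limit", "field": "clicks_per_min", "threshold": 60}
)
```

Rules too complex to generate this way would fall into the "pending‑case" bucket and be entered manually, as the Rule Management module distinguishes.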
Execution Engine Details
Pre‑processor – handles mock data, blacklist data, and parameter customization.
Interface Adapter – routes calls based on service.
Verification & Statistics – validates responses and produces reports.
Data Cleanup – removes pre‑data and mock data; supports retry with failure logging.
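The four execution stages above (pre‑processing, API call, verification, cleanup with retry) can be sketched as a single pipeline, together with the 100% pass‑rate release gate from the reporting module. All function names and parameters here are placeholders; the real engine also handles mock/blacklist data and routes calls through a service adapter.

```python
import logging

def run_case(case, api_call, cleanup, max_cleanup_retries=3):
    """Execute one test case through the four stages.

    api_call and cleanup are placeholder callables standing in for the
    interface adapter and the data-cleanup step, respectively.
    """
    request = dict(case["request"])  # pre-processing: copy and customize params
    try:
        response = api_call(request)           # API call via the adapter
        return response == case["expected"]    # verification
    finally:
        # Data cleanup always runs, with retry and failure logging,
        # so one failed case does not pollute subsequent ones.
        for attempt in range(1, max_cleanup_retries + 1):
            try:
                cleanup(request)
                break
            except Exception:
                logging.warning("cleanup attempt %d failed", attempt)

def release_ready(results):
    """Release gate: the overall pass rate must be 100%."""
    return bool(results) and all(results)
```

Running cleanup in a `finally` block mirrors the rollback/isolation idea: verification failures still leave the environment clean for the next case.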
Results – QA effort was reduced from several person‑days to under one person‑hour.
Future Plans
Automatic dependency package injection to further improve efficiency.
Full automatic case generation.
Support for dependency‑type scenarios.
Author: Liu Lingfeng, Senior Test Engineer at 58.com, responsible for commercial anti‑cheat testing since July 2018.
58 Tech
Official tech channel of 58, a platform for tech innovation, sharing, and communication.