
Intelligent Testing: A Scenario‑Driven Approach to Scaling AI‑Enabled Test Phases

This article presents a scenario‑driven framework for scaling Baidu's three‑stage intelligent testing. It applies AI‑augmented techniques to each of the five testing steps (input, execution, analysis, localization, and evaluation) to improve coverage, execution efficiency, analysis precision, root‑cause identification, and real‑time risk assessment for high‑quality software testing.

Baidu Geek Talk

Abstract: The previous article introduced the three stages of Baidu intelligent testing. This article proposes a scenario‑driven method to advance those three stages at scale in an orderly way.

Testing activities can be divided into five steps: test input, test execution, test analysis, test localization, and test evaluation. Because the goals of each step differ, applying intelligence uniformly to the entire testing process can lead to confusion and hinder practical implementation.

The proposed approach treats a specific test activity within a service as a pilot scenario, forming a point‑line‑plane‑body progression (illustrated in the accompanying diagram).

01. Intelligent Exploration in the Test Input Stage

Test input aims to identify a comprehensive and accurate set of test behaviors, data, and realistic environments to cover more scenarios and achieve higher code coverage. Traditionally, input selection relies heavily on experience and historical cases. AI‑enabled techniques such as smart anomaly case generation, massive query search for optimal queries, page‑traversal recommendation actions, and fuzzing of function or API parameters can further enrich test inputs and improve recall.

In scenarios such as traffic expansion and initial filtering, coverage is first expanded, and then feature bucketing, instrumentation, and deduplication filter out inputs that contribute no new features. For abnormal scenarios, control‑flow graphs and data modeling guide mutation strategies that generate anomalous cases, balancing completeness against execution efficiency.
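The mutation idea can be sketched as a small set of operators applied to a known‑good input. This is a minimal illustration, not Baidu's implementation; the operator names and the request shape are assumptions.

```python
import random

# Hypothetical mutation operators for deriving anomalous test inputs
# from a known-good request dict; strategies are illustrative only.
def mutate_drop_field(request):
    """Remove one field to probe missing-parameter handling."""
    mutated = dict(request)
    del mutated[random.choice(list(mutated))]
    return mutated

def mutate_type_flip(request):
    """Replace one value with a value of an unexpected type."""
    mutated = dict(request)
    key = random.choice(list(mutated))
    mutated[key] = None if mutated[key] is not None else 0
    return mutated

def mutate_boundary(request):
    """Push numeric values to an extreme boundary (INT32 max here)."""
    mutated = dict(request)
    for key, value in mutated.items():
        if isinstance(value, int):
            mutated[key] = 2**31 - 1
    return mutated

def generate_anomalous_cases(request, n=10, seed=0):
    """Apply randomly chosen operators to produce n anomalous variants."""
    random.seed(seed)
    operators = [mutate_drop_field, mutate_type_flip, mutate_boundary]
    return [random.choice(operators)(request) for _ in range(n)]
```

In a real pipeline, a control‑flow graph or data model would decide which fields are worth mutating, rather than mutating uniformly at random.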

02. Intelligent Exploration in the Test Execution Stage

The execution stage focuses on efficiently running the selected test inputs with minimal cost while maintaining problem‑discovery capability. Traditional practice executes all determined test cases without optimization, leading to redundancy, high resource consumption, and low efficiency.

Intelligent testing selects, deduplicates, balances groups, and schedules resources for test sets. It employs static evaluation (scanning test case content), dynamic evaluation (executing cases and analyzing states), mutation testing (injecting source‑code anomalies), and flaky detection to determine which cases to run. Strategies such as smart cancellation, skipping, trimming, sorting, and combining ensure that the most valuable cases are executed in the shortest time.
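A simple way to picture the selection and sorting strategies is a greedy ranking by expected value per unit of runtime, under a time budget. The field names and scoring rule below are assumptions for illustration, not the production policy.

```python
def prioritize_tests(cases, time_budget):
    """Greedily pick test cases that maximize expected failures found
    per second, subject to a total runtime budget.

    Each case is a dict with 'name', 'runtime_s', and 'fail_rate'
    (historical failure probability) -- illustrative fields only.
    """
    # Rank by failure probability per second of runtime, best first.
    ranked = sorted(cases,
                    key=lambda c: c["fail_rate"] / c["runtime_s"],
                    reverse=True)
    selected, used = [], 0.0
    for case in ranked:
        if used + case["runtime_s"] <= time_budget:
            selected.append(case["name"])
            used += case["runtime_s"]
    return selected
```

In practice the score would also incorporate static and dynamic evaluation results, mutation scores, and flaky‑test history rather than a single historical failure rate.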

03. Intelligent Exploration in the Test Analysis Stage

This stage analyzes execution results to determine whether problems exist, aiming for high precision and recall. Traditionally, experts set thresholds and metrics, which can be brittle as systems evolve.

Intelligent analysis leverages historical execution data: exponential smoothing detects fluctuations in data size and row counts, threshold‑range modeling tracks business metrics, visual techniques assess front‑end screenshots, and dynamic time warping (DTW) curve fitting detects memory leaks, reducing reliance on manual judgment.
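The exponential‑smoothing idea can be shown in a few lines: forecast the next value from history and flag observations that deviate beyond a tolerance. The smoothing factor and tolerance below are illustrative assumptions.

```python
def smooth_forecast(history, alpha=0.3):
    """Single exponential smoothing: forecast the next value of a series."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def is_anomalous(history, observed, tolerance=0.2):
    """Flag an observation deviating from the smoothed forecast by more
    than a relative tolerance (thresholds here are assumptions)."""
    forecast = smooth_forecast(history)
    return abs(observed - forecast) > tolerance * forecast
```

A production system would learn per‑metric tolerances from historical variance instead of a fixed 20%.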

04. Intelligent Exploration in the Test Localization Stage

Localization quickly identifies the root cause of test failures to enable rapid remediation. Conventional methods involve manual investigation of tools, environments, and code.

Intelligent approaches use decision‑tree based failure localization to automatically rebuild or self‑heal tool failures, and employ change‑wall and online system knowledge graphs to pinpoint monitoring issues, assisting developers and operations in swift damage control.
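A decision tree over failure signals can be pictured as a rule cascade; in practice such a tree would be learned from labeled failure records rather than hand‑written. The signal names and categories below are assumptions.

```python
def localize_failure(signals):
    """Classify a test failure's likely root cause from observed signals.

    A hand-built stand-in for a learned decision tree; field names
    ('env_unreachable', 'tool_exit_code', ...) are illustrative.
    """
    if signals.get("env_unreachable"):
        return "environment"    # candidate for automatic rebuild/self-heal
    if signals.get("tool_exit_code", 0) != 0:
        return "tooling"        # retry after repairing the toolchain
    if signals.get("recent_code_change"):
        return "code_change"    # route to the change author
    return "unknown"            # escalate for manual triage
```

The "environment" and "tooling" branches are where automatic rebuild or self‑healing can be triggered without human intervention.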

05. Intelligent Exploration in the Test Evaluation Stage

Evaluation assesses overall system risk by combining test execution outcomes with system changes. A quality‑risk model is built from full‑process test data, selecting relevant risk factors, collecting and transforming them into features, and training machine learning models to quantify risk. The model provides real‑time risk conclusions for new projects and creates a feedback loop where accumulated project data continuously refines the model.
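The risk model's scoring step can be sketched as a logistic function over weighted features. The feature names and weights below are illustrative assumptions; a real model would learn them from accumulated project data, as the text describes.

```python
import math

# Illustrative feature weights for a quality-risk score; a trained
# model would learn these from historical project outcomes.
WEIGHTS = {
    "lines_changed": 0.002,     # size of the change
    "failed_case_ratio": 3.0,   # fraction of failing test cases
    "coverage_drop": 2.0,       # drop in code coverage
}
BIAS = -2.0

def risk_score(features):
    """Map project features to a 0-1 risk probability via a logistic
    function over a weighted sum (weights above are assumptions)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))
```

The feedback loop in the text corresponds to periodically refitting these weights as each finished project contributes a labeled outcome.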

Overall, the scenario‑driven, AI‑augmented methodology enables scalable, efficient, and high‑quality software testing across all phases.

Tags: software quality, test automation, AI testing, intelligent testing, test management