Intelligent Test Evaluation and Risk Assessment in Software Quality Assurance
The article describes an intelligent test‑evaluation framework that gathers performance data and quantifies risk along project, personnel, and code dimensions. These dimensions feed rule‑based and logistic‑regression models that produce risk scores and risk‑driven testing plans; the authors report that the approach identified over a thousand high‑risk projects, intercepted hundreds of bugs, and saved more than two thousand person‑days.
The article continues a series on intelligent testing, focusing on the "test evaluation" stage. Test evaluation collects performance data generated by quality‑assurance activities, applies strategies and algorithms to estimate residual quality risk, and decides whether additional quality activities are needed before a project goes live.
Risk is introduced during development, when risky code is written. By analyzing who writes what code under which circumstances, the authors identify risk dimensions such as project risk (duration, module count, change count), personnel risk (bugs per KLOC, module familiarity, test‑return count), and code risk (changed lines, function complexity, impact on interfaces and UI, affected‑user density). These dimensions are structured and fed into models or rule‑based systems to estimate the probability and impact of risk, which in turn guides the level of QA involvement.
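To make "structured and fed into models" concrete, here is a minimal sketch of how such risk dimensions might be collected into a record and flattened into a feature vector. The field names and values are illustrative assumptions, not the authors' actual schema.

```python
from dataclasses import dataclass, asdict

# Hypothetical risk-dimension record; field names are illustrative,
# not the article's actual schema.
@dataclass
class RiskDimensions:
    # project risk
    duration_days: int
    module_count: int
    change_count: int
    # personnel risk
    bugs_per_kloc: float
    familiarity: float        # 0..1, developer familiarity with the module
    test_return_count: int    # times this developer's changes were returned by QA
    # code risk
    changed_lines: int
    function_complexity: float
    interface_impact: int     # number of affected interfaces

    def as_feature_vector(self) -> list[float]:
        """Flatten the structured dimensions into a numeric vector for a model."""
        return [float(v) for v in asdict(self).values()]

record = RiskDimensions(
    duration_days=14, module_count=3, change_count=27,
    bugs_per_kloc=1.8, familiarity=0.4, test_return_count=2,
    changed_lines=850, function_complexity=12.5, interface_impact=4,
)
vec = record.as_feature_vector()
```

A real pipeline would populate such records automatically from version control, bug trackers, and code-analysis tools rather than by hand.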
Risk admission assessment combines risk‑dimension data, decision policies, and task types to determine the appropriate quality‑activity coverage (full automation, developer self‑test, QA intervention, etc.). The process builds a data‑mining pipeline that links project, personnel, and code risk portraits to quantitative risk scores, then generates risk‑driven testing plans.
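The mapping from risk score and task type to a quality‑activity level can be sketched as a simple rule table. The thresholds and activity names below are assumptions for illustration; the article's actual decision policies are not specified at this level of detail.

```python
# Illustrative rule-based admission policy; thresholds and level names
# are made up for the sketch.
def admission_decision(risk_score: float, task_type: str) -> str:
    """Map a quantitative risk score and task type to a quality-activity level."""
    if task_type == "config_change" and risk_score < 0.2:
        return "full_automation"          # regression suite only, no human testing
    if risk_score < 0.4:
        return "developer_self_test"      # developer verifies with self-tests
    if risk_score < 0.7:
        return "qa_intervention"          # targeted manual plus automated tests
    return "full_qa_coverage"             # full test plan acts as a release gate

level = admission_decision(0.55, "feature_change")
```

In practice such rules would be maintained as configuration so QA leads can tune thresholds per task type without code changes.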
The article further describes multi‑dimensional activity data mining, covering white‑box coverage (statement, branch, function), log coverage (exception logs), business‑request coverage (knowledge‑graph of request paths), and simulation fidelity (environment and traffic simulation). These metrics are collected during test execution to verify whether the earlier risk admission assessment has been satisfied.
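Verifying that the admission assessment has been satisfied amounts to comparing measured activity metrics against required thresholds. A minimal sketch, with assumed metric names and required values:

```python
# Sketch of checking executed-test coverage against admission requirements.
# Metric names and thresholds are assumptions for illustration.
REQUIRED = {"statement": 0.80, "branch": 0.60, "function": 0.90, "log": 0.70}

def coverage_satisfied(measured: dict[str, float]) -> tuple[bool, list[str]]:
    """Return whether all required coverage metrics are met, and which fall short."""
    short = [m for m, req in REQUIRED.items() if measured.get(m, 0.0) < req]
    return (not short, short)

ok, gaps = coverage_satisfied({"statement": 0.85, "branch": 0.55,
                               "function": 0.92, "log": 0.75})
# branch coverage falls short of its 0.60 threshold in this example
```

The same check generalizes to the other activity dimensions the article lists, such as business‑request coverage and simulation fidelity, as long as each can be expressed as a measured ratio against a target.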
Finally, a model‑based risk assessment is introduced. Historical risk‑dimension and activity data are used to train a logistic‑regression model that predicts final project risk. The model outputs both probability and impact, which are combined via a risk matrix to produce actionable decisions (e.g., block release, add tests, or proceed). Reported results show the system identified over 1,000 high‑risk projects, intercepted 300+ bugs, and saved more than 2,000 person‑days by automating risk‑driven testing.
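The probability‑plus‑impact flow above can be sketched end to end: a logistic function turns features into a risk probability, and a risk matrix turns probability and impact bands into a decision. The weights, bands, and matrix entries here are toy values (the real model is trained on historical risk‑dimension and activity data), so treat this as a shape of the computation, not the authors' model.

```python
import math

# Hand-set weights standing in for a trained logistic-regression model;
# the feature order (changed lines, test returns, 1 - familiarity) and the
# numbers are purely illustrative.
WEIGHTS = [0.002, 0.15, 0.3]
BIAS = -2.0

def risk_probability(features: list[float]) -> float:
    """Sigmoid of a weighted feature sum: the logistic-regression output."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1.0 / (1.0 + math.exp(-z))

def risk_decision(probability: float, impact: str) -> str:
    """Combine probability and impact bands via a risk matrix (assumed entries)."""
    p_band = "high" if probability >= 0.6 else "medium" if probability >= 0.3 else "low"
    matrix = {
        ("high", "high"): "block_release",
        ("high", "low"): "add_tests",
        ("medium", "high"): "add_tests",
        ("medium", "low"): "proceed_with_monitoring",
        ("low", "high"): "proceed_with_monitoring",
        ("low", "low"): "proceed",
    }
    return matrix[(p_band, impact)]

# Large change, two QA returns, low developer familiarity -> medium probability.
p = risk_probability([850, 2, 0.6])
decision = risk_decision(p, "high")
```

The matrix makes the output actionable: the same probability leads to different decisions depending on impact, which is why the article combines both rather than thresholding probability alone.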
Baidu Geek Talk