
Practical Experience and Insights on Functional Testing for Complex CRM Systems

This article shares practical experiences and reflections on functional testing for a commercial advertising CRM system, covering requirement analysis, test case creation, test execution stages, and tips for avoiding missed tests in complex, multi‑subsystem projects.

360 Quality & Efficiency

Functional testing may seem simple, but doing it well is far from easy. The author tests a commercial advertising CRM system comprising 18 subsystems, whose long, complex processes and intricate logic make thorough pre‑release testing challenging; fortunately, no major defects have escaped testing so far. This article shares the author's personal experiences and reflections on functional testing.

After receiving a requirement, the work is divided into three parts: requirement analysis, test case creation, and test execution. A crucial prerequisite is being thoroughly familiar with the system under test. The following sections discuss each part in detail.

Requirement Analysis + Design Analysis

When a requirement arrives, the first step is requirement analysis. Many overlook this, treating it as a product‑only task and merely translating the requirement document into test cases, blaming the product for any gaps. In the author’s experience, thorough requirement analysis is essential—"sharpening the axe before chopping wood"—to produce meaningful, correct test cases, a practice the author calls “requirement testing”.

The first step of requirement testing is understanding the purpose behind the requirement. Although it may seem the product decides what to build, testers need to grasp the motivation for several reasons:

Some requirements are one‑off; identifying alternative solutions can avoid unnecessary work.

Understanding the target role (business user vs. admin) helps testers adopt the appropriate perspective.

Clarifying ambiguous requirements prevents writing incorrect or superficial test cases.

Determining test granularity aids accurate effort estimation (e.g., detailed testing for business‑user uploads versus basic checks for admin uploads).

Mapping the overall requirement flow from documents and prototypes ensures a complete mental model, naturally leading to test cases.

The second step of requirement analysis focuses on how business functions are implemented, typically split into two parts:

1. Page implementation – detailing each module and field, often tracing values back to database tables and columns, sometimes requiring input from developers after detailed design.

2. Functional implementation – describing what each page does, enumerating business scenarios, and identifying related database tables, fields, and data flow.
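Tracing a page value back to its backing table and column can be sketched as a small check. This is a hypothetical illustration only: the table and column names (`t_advertiser`, `balance`) and the in-memory SQLite stand-in are assumptions, not the author's actual CRM schema or tooling.

```python
# Hypothetical sketch: verify that a page-level value traces back to its
# backing table/column, using an in-memory SQLite stand-in for the CRM schema.
# Table and column names (t_advertiser, balance) are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t_advertiser (id INTEGER PRIMARY KEY, name TEXT, balance REAL)"
)
conn.execute("INSERT INTO t_advertiser VALUES (1, 'Acme Ads', 5000.0)")
conn.commit()

def page_balance(advertiser_id: int) -> float:
    """Stand-in for the value the page layer would render for this field."""
    row = conn.execute(
        "SELECT balance FROM t_advertiser WHERE id = ?", (advertiser_id,)
    ).fetchone()
    return row[0]

# The test point: the displayed value must equal the stored column value.
assert page_balance(1) == 5000.0
```

In a real project the query would run against the system's database and `page_balance` would come from the UI or API layer; the point is that the field-to-column mapping identified during design analysis becomes a directly checkable assertion.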

In many projects, design reviews are missing, leading to fragmented development and painful integration. The author mitigates this by sitting with developers to verify implementation plans, configurations, and data flow, which often uncovers logic bugs early and reduces fix costs.

Test Cases

Traditional test case documents are declining; most teams now capture test points in mind‑maps. The author’s test cases cover four aspects:

UI – testing interface elements and common UI scenarios.

Business functional points – breaking down all business scenarios, covering both positive and negative paths to hit all code branches.

Database – covering relevant tables and fields.

Configuration files – ensuring familiarity for troubleshooting and confirming no missing configs are deployed.

These four aspects mirror the earlier requirement and design analyses; once analysis is complete, test cases naturally emerge. Common omissions include special characters (e.g., double quotes in advertiser names) that can cause platform failures.
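The special-character omission mentioned above lends itself to a table-driven check. The sketch below is illustrative: `sanitize_advertiser_name` is a stand-in for whatever input handling the system actually performs, and the case list and escaping policy are assumptions, not the author's real test suite.

```python
# Hypothetical sketch: probing an advertiser-name field with special
# characters. sanitize_advertiser_name is a stand-in for the system's
# real input handling; the escaping policy here is an assumption.
import html

SPECIAL_NAME_CASES = [
    'Acme "Premium" Ads',          # double quotes (the miss noted above)
    "O'Brien & Sons",              # single quote and ampersand
    "<script>alert(1)</script>",   # markup injection attempt
    "广告主\t\n",                   # non-ASCII plus control characters
]

def sanitize_advertiser_name(raw: str) -> str:
    """Stand-in: drop non-printable characters, escape HTML-sensitive ones."""
    cleaned = "".join(ch for ch in raw if ch.isprintable())
    return html.escape(cleaned, quote=True)

for raw in SPECIAL_NAME_CASES:
    stored = sanitize_advertiser_name(raw)
    # The stored value must not carry raw quotes or tags into pages or SQL.
    assert '"' not in stored and "<" not in stored, raw
```

Keeping the troublesome inputs in a named list makes it cheap to extend the table whenever a new character class causes a failure in production.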

Test Execution

The author outlines a four‑stage testing process, acknowledging that individual approaches may vary:

Smoke testing – run core positive‑path cases first; avoid diving straight into detailed tests.

Detailed testing – execute all test cases for a module on a stable build, updating code only between modules; the author refreshes the code each morning and after finishing a module.

Second‑round testing – regression testing of bugs found in the first round, plus retesting related functionality.

Final regression – retest any bugs found during the second round (typically few) and perform a comprehensive retest of all features.
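The staging above can be sketched as a small runner that gates the detailed pass behind smoke results. All names here (`Case`, the sample case titles, the `run` stub) are illustrative assumptions, not the author's actual tooling.

```python
# Minimal sketch of staged execution: smoke cases gate the detailed pass.
# Case names and the run() stub are illustrative, not the author's tooling.
from dataclasses import dataclass

@dataclass
class Case:
    name: str
    smoke: bool          # core positive-path case?

    def run(self) -> bool:
        return True      # stub: a real runner would drive the system here

CASES = [
    Case("create order (happy path)", smoke=True),
    Case("order rejects empty advertiser", smoke=False),
    Case("order audit writes the order log table", smoke=False),
]

def execute(cases):
    # Stage 1: smoke — stop early if a core path is broken.
    for c in (c for c in cases if c.smoke):
        if not c.run():
            return "blocked: smoke failed at " + c.name
    # Stage 2: detailed — everything else, module by module.
    failed = [c.name for c in cases if not c.smoke and not c.run()]
    # Stages 3-4 (bug regression, final regression) would re-run `failed`
    # plus neighbouring functionality after fixes land.
    return failed

print(execute(CASES))   # [] when all stubs pass
```

The design choice worth noting is the early return on a smoke failure: it encodes the advice above that there is no point diving into detailed cases while a core positive path is broken.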

The author invites fellow testers to share comments and discuss these experiences.

Qtest is the professional testing team under 360, pioneering platform‑wide testing automation and efficiency.

Tags: test case design, functional testing, CRM, testing process, software QA
Written by

360 Quality & Efficiency

360 Quality & Efficiency focuses on seamlessly integrating quality and efficiency in R&D, sharing 360’s internal best practices with industry peers to foster collaboration among Chinese enterprises and drive greater efficiency value.
