Mobile Development · 8 min read

Implementing UI Automation Regression for Mobile Event Tracking (埋点)

This article details a UI automation regression framework for mobile event tracking, covering background challenges, Android and iOS log collection methods, H5 integration strategies, performance comparisons, and practical recommendations to improve testing efficiency by about 50%.

转转QA

Background and Purpose

Online systems contain more than 1,000 event‑tracking points (埋点) and over 300 main‑flow test cases. During feature iteration, regression of existing tracking points often requires extensive manual effort, and testing across two platforms easily leads to missed cases.

To improve the situation, the team explored UI-automation regression. After rollout, regression efficiency improved by roughly 50%, and continuously adding scenarios further reduces the risk of missed cases.

Assertion Method

Case Execution Flow

How to Obtain Client‑Reported Event Data

1. Android client via adb command

def save_logcat_to_file(self, file_path, grep_str="", extra_args="", parameter="-d"):
    """
    Save logcat output to the specified file and return the process.
    :param file_path: log storage path
    :param grep_str: filter string
    :param extra_args: additional adb arguments
    :param parameter: extra logcat parameter, default -d (dump buffered logs and exit without blocking)
    :return: log process
    """
    logcat_cmd = "shell "
    to_file_cmd = " > " + file_path
    if extra_args:
        logcat_cmd += extra_args + " "
    if grep_str:
        if config.is_windows:
            # Quote the whole remote command so the pipe to grep runs on the device
            logcat_cmd += "\"logcat " + parameter + " -v time | grep '" + grep_str + "'\""
        else:
            logcat_cmd += "logcat " + parameter + " -v time | grep '" + grep_str + "'"
    else:
        # Keep the "shell" prefix and the parameter in the unfiltered path as well
        logcat_cmd += "logcat " + parameter + " -v time"
    # Execute via adb and return the running process
    return self.cmd(logcat_cmd + to_file_cmd, return_proc=True)
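As a rough standalone sketch, the command construction above can be reduced to a pure function, which makes the Windows/Unix quoting difference easy to unit-test in isolation (the helper name and its exact return format are hypothetical, not part of the framework):

```python
def build_logcat_cmd(file_path, grep_str="", parameter="-d", is_windows=False):
    """Build the adb sub-command that dumps (optionally filtered) logcat to a file.

    Hypothetical helper mirroring the command construction shown above; the
    returned string would be appended to something like "adb -s <serial> ".
    """
    cmd = "shell "
    if grep_str:
        remote = "logcat " + parameter + " -v time | grep '" + grep_str + "'"
        if is_windows:
            # Quote the remote pipeline so grep runs on the device, not locally
            cmd += '"' + remote + '"'
        else:
            cmd += remote
    else:
        cmd += "logcat " + parameter + " -v time"
    return cmd + " > " + file_path
```

Keeping the string-building logic separate from process handling also makes it straightforward to verify that the grep filter and the `-d` parameter end up in the right place before any device is involved.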

2. iOS client via idevicesyslog (or tidevice) command

def save_device_log(device_id, device_log_path):
    """
    Start saving the device log to the specified file and return the process.
    :param device_id: target device identifier
    :param device_log_path: path for the log file
    :return: log process, or None on failure
    """
    try:
        Logger().setlog(device_id + " starting case log collection", LEVEL_INFO)
        if PLATFORM_IOS in config.operating_system:
            # idevicesyslog also works: "idevicesyslog -u <device_id> > <path>"
            proc = Shell.proc("tidevice -u " + device_id + " syslog > " + device_log_path)
        else:
            device = ADB(serialno=device_id)
            device.clear_logcat()  # clear old logs so the file only covers this case
            proc = device.save_logcat_to_file(device_log_path)
        Logger().setlog(device_id + " case log collection started", LEVEL_INFO)
        return proc
    except Exception:
        print("Failed to save device log")
        return None
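Once a case finishes, the saved log file can be scanned for the reported tracking points and asserted per event. A minimal, self-contained sketch of such an assertion step follows; the log-line marker, JSON payload format, and event names here are illustrative assumptions, not the production reporting format:

```python
import json
import re


def find_tracked_events(log_text, marker="TRACK_EVENT"):
    """Extract tracking-point payloads from raw device-log text.

    Assumes (for illustration) that the client prints one JSON payload per
    event after a known marker, e.g.:
        ... TRACK_EVENT {"event_id": "HOME_CLICK", "page": "home"}
    """
    events = []
    pattern = re.escape(marker) + r"\s+(\{.*\})"
    for line in log_text.splitlines():
        m = re.search(pattern, line)
        if m:
            events.append(json.loads(m.group(1)))
    return events


def assert_event_reported(events, event_id, **expected_fields):
    """Assert that an event with the given id and field values was reported."""
    for ev in events:
        if ev.get("event_id") == event_id and all(
            ev.get(k) == v for k, v in expected_fields.items()
        ):
            return ev
    raise AssertionError("tracking point %s not reported as expected" % event_id)
```

In this shape each tracking point becomes one assertion call against the file produced by `save_device_log`, which is what allows every point to be regressed as an independent case.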

3. H5 page via WebView capability

Because H5 integration was still experimental, three candidate solutions were evaluated.

Solution 1 (Discarded): Capture H5 logs and assert them. Reason: H5 tracking points report directly to production, making test data contamination risky; log capture cost is high.

Solution 2 (Discarded): Create a new API for H5 to asynchronously report tracking data. Reason: High resource cost, high‑frequency calls in production, potential latency and stability issues affecting business.

Solution 3 (Final): In the H5 reporting method, asynchronously invoke the client WebView capability, sharing the same implementation used by the native client and enabling tracking point assertions.

Comparison of Test Methods Before and After Implementation

| Scenario | Traditional Manual Regression | UI Automation Regression | Pros & Cons After Implementation |
| --- | --- | --- | --- |
| Client operation | Manually set up a proxy, enable debug mode, and repeat high-frequency APP operations | Automated operations | Pros: greatly reduces repetitive manual work. Cons: cases must be recorded in advance |
| Event reporting verification | After operating the APP, wait ~3 minutes for platform data latency, then query the analytics platform (e.g., Sensors Analytics) with SQL to check each field and the reporting timing | Each reported log is asserted automatically; every tracking point becomes an independent case, allowing isolated regression and quick manual spot-checks | Pros: fewer missed tests, ~50% efficiency gain. Cons: some random manual verification is still needed initially |
| Regression frequency & scope | Only main-flow tracking points are regressed | Any recorded case can be covered automatically | Pros: full coverage; the more regressions run, the clearer the efficiency gain. Cons: some effort shifts to case entry and maintenance |

Conclusion

1. Clarify the business processes and define the automation scope.

2. Design the automation framework.

3. Identify the pain points of tracking-point testing and compare candidate solutions before committing.

4. Implement automated data validation and comparison.

5. Increase automation utilization: trigger the suite on each release and schedule daily detection runs.
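The last point can be wired up with any scheduler or CI system. As an illustrative crontab entry (the paths, script name, and flags are placeholders, not the actual project layout):

```shell
# Run the tracking-point regression suite every day at 02:00;
# run_tracking_regression.py stands in for the suite's entry point
0 2 * * * cd /opt/ui-automation && python run_tracking_regression.py --platform all >> /var/log/track_regression.log 2>&1
```

A release-triggered run of the same entry point from the CI pipeline covers the per-release case, while the cron job catches regressions that land between releases.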

Tags: iOS, Android, UI Automation, Mobile Testing, event tracking, regression testing