Automated Testing Frameworks: Design, Data Generation, UI/API Automation, and CI Integration
The article explains how automated testing—using Python‑based UI and API frameworks, configurable test‑data generation, Linux headless execution, and Jenkins continuous integration—addresses the growing complexity of software releases by reducing manual regression effort and improving defect detection speed.
1. Background
As the company’s business expands, more system modules are added, leading to a rapid increase in test points and complexity. Manual regression testing consumes significant effort, especially when code changes frequently and functional bugs can reappear unnoticed.
Automation testing is presented as a key solution to these pain points, allowing testers to focus on new functionality while repetitive verification is handled by scripts.
2. Definition
Automated testing converts human‑driven test actions into machine‑executed processes. After test cases are designed and reviewed, scripts execute them automatically and compare actual results with expected outcomes, saving labor, time, and resources.
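This expected-versus-actual comparison maps directly onto assertions in a test framework. A minimal sketch with Python's unittest (the function and values are illustrative, not from the article's codebase):

```python
import unittest


def calc_discount(price, rate):
    """Business function under test (illustrative stand-in)."""
    return round(price * (1 - rate), 2)


class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        expected = 90.0                      # expected outcome from the test design
        actual = calc_discount(100.0, 0.1)   # actual result produced by the code
        self.assertEqual(expected, actual)   # machine-executed comparison
```

Once a case like this is reviewed, it can be re-run on every build at no extra manual cost, which is the labor saving the definition describes.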
3. Application Scenarios
• For major version tests, even unchanged features are retested via automated regression to ensure stability.
• For weekly minor releases, automated scripts provide quick feedback, reducing manual workload and catching issues early.
4. Automated Testing Details
4.1 Automation Framework
Two frameworks are used: UI automation and API automation. Both follow data‑separation, layered‑logic, and data‑driven design principles, are written in Python 3, and rely on Selenium + unittest + HtmlTestRunner. Diagrams in the original article illustrate their structures.
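As an illustration of these design principles (the names and data below are hypothetical, not the team's actual code), a layered, data-driven case keeps its test data in a separate structure, wraps page interactions in a logic layer, and drives one test method over every data row with unittest's subTest:

```python
import unittest

# Data layer: test data kept separate from test logic (in practice this
# might live in a YAML or Excel file loaded at run time)
LOGIN_CASES = [
    {"name": "valid_user",   "user": "alice",   "pwd": "secret", "expect": True},
    {"name": "empty_pwd",    "user": "alice",   "pwd": "",       "expect": False},
    {"name": "unknown_user", "user": "mallory", "pwd": "x",      "expect": False},
]


class FakeLoginPage:
    """Logic layer: wraps page/API interactions (stubbed here)."""

    VALID = {"alice": "secret"}

    def login(self, user, pwd):
        return self.VALID.get(user) == pwd and pwd != ""


class LoginTest(unittest.TestCase):
    """Case layer: one data-driven test method covers every data row."""

    def test_login(self):
        page = FakeLoginPage()
        for case in LOGIN_CASES:
            with self.subTest(case["name"]):
                self.assertEqual(case["expect"],
                                 page.login(case["user"], case["pwd"]))
```

The separation means new scenarios are added by appending rows to the data layer, without touching the test logic — the property that makes highly configurable business flows manageable.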
4.2 UI Automation Test‑Data Generation
Because business processes are highly configurable, test data must be generated dynamically based on configuration items. A workflow example is shown in the original article, and the data‑generation logic uses a depth‑first search (DFS) on a directed graph to enumerate all possible paths. The core algorithm is:
```python
# DFS over the workflow graph: enumerate every path from the start node to target
adjvexNode = -1  # last node popped while backtracking; -1 means "none yet"
while not self.__stack.isEmpty():
    topNode = self.__stack.peek()
    if topNode == target:
        # The stack now holds one complete path; record it
        path = self.__stack.getItems()
        self.__wfPath.append(path)
        print(path)
        # Backtrack: pop the target and mark the edge leading to it unvisited
        adjvexNode = self.__stack.pop()
        self.changeStatus(self.__stack.peek(), adjvexNode, False)
    else:
        # Visit the next unvisited node adjacent to topNode
        nextNode = self.getNextNode(topNode, adjvexNode)
        if nextNode != -1:
            self.__stack.push(nextNode)
            self.changeStatus(topNode, nextNode, True)
            adjvexNode = -1
        else:
            # No unvisited neighbours remain: backtrack one level
            adjvexNode = self.__stack.pop()
            self.changeStatus(self.__stack.peek(), adjvexNode, False)
```

Filter conditions stored as SQL are parsed with sqlparse to extract and normalize predicates before comparison:
```python
import sqlparse
from sqlparse.sql import Comparison, Parenthesis, Where

filterList = list()
res = sqlparse.parse(sql)
for token in res[0].tokens:
    if isinstance(token, Where):
        statementList = list()
        m = iter(range(len(token.tokens)))
        for x in m:
            subToken = token.tokens[x]
            if isinstance(subToken, Parenthesis):
                tokenValue = subToken.value[1:-1]  # strip the enclosing parentheses
                if "and" in tokenValue:
                    statementList.extend(tokenValue.split("and"))
                elif "or" in tokenValue:
                    statementList.append(tokenValue.split("or")[0])
                else:
                    statementList.append(tokenValue)
            elif isinstance(subToken, Comparison):
                statementList.append(subToken.value)
            elif subToken.value in ('in', 'not in', '=', '!=', '<>', 'not',
                                    '>', '<', 'like', 'not like'):
                # Rebuild the predicate from the operator and its neighbours:
                # left operand, whitespace, operator, whitespace, right operand
                statementList.append(token.tokens[x - 2].value + token.tokens[x - 1].value
                                     + token.tokens[x].value + token.tokens[x + 1].value
                                     + token.tokens[x + 2].value)
                # Skip the two tokens already consumed to the operator's right
                next(m)
                next(m)
```

4.3 Linux Execution of UI Automation
To run UI tests on Linux, Chrome is launched in headless mode; when a visible browser is required instead, pyvirtualdisplay provides a virtual X server:
```python
from selenium import webdriver
from pyvirtualdisplay import Display

options = webdriver.ChromeOptions()
options.add_argument('disable-infobars')
if headlessFlag:
    options.add_argument("--headless")
    options.add_argument('--no-sandbox')
    options.add_argument("--disable-gpu")

# On Linux, when not running headless, start a virtual X display
# before launching the browser
if sysstr != "Windows" and not headlessFlag:
    display = Display(size=(1920, 1080), use_xauth=True)
    display.start()

self.driver = webdriver.Chrome(options=options)
```

4.4 API Automation HTTP Requests
API tests are implemented with the requests library, supporting GET and POST with various payload formats (form‑data, JSON):
```python
import requests

# GET request with optional per-call overrides of the instance defaults
response = requests.get(self.url,
                        headers=headers if headers else self.headers,
                        params=params if params else self.params,
                        timeout=float(self.timeout))

# POST with a form-data payload
response = requests.post(self.misUrl + url,
                         headers=headers if headers else self.headers,
                         params=params if params else self.params,
                         data=data if data else self.data,
                         timeout=float(self.timeout))

# POST with a JSON payload
response = requests.post(self.url,
                         headers=headers if headers else self.headers,
                         json=self.data,
                         timeout=float(self.timeout))
```

4.5 Jenkins Continuous Integration
Jenkins is configured to trigger daily smoke tests for development builds and weekly regression tests for patch packages. Test reports are generated automatically, showing error screenshots and failure messages to aid debugging.
5. Summary and Outlook
The article demonstrated how to implement automated testing for highly configurable business scenarios, covering dynamic test‑data generation, Linux UI execution, and API automation utilities. Future work aims to increase test coverage, improve robustness, and integrate mobile API tests with web UI automation for more realistic end‑to‑end validation.
Zhengtong Technical Team