
Designing a Multi-Project API Automation Framework with Python and Pytest

This guide explains how to build a modular, multi‑project API automation framework in Python, covering directory layout, configuration handling, HTTP client encapsulation, pytest‑based test cases, a shell script for execution, and best‑practice recommendations for logging and reporting.

Test Development Learning Exchange

Introduction: constructing a multi‑project API automation framework involves careful planning of project organization, configuration management, test case structuring, logging, and report generation.

1. Directory Structure

The framework follows a clear hierarchy to support scalability and maintainability:

api_automation_framework/
├── config/                # configuration files
│   ├── settings.py        # global settings and environment switch
│   ├── environments_development.py  # development config
│   ├── environments_testing.py      # testing config
│   └── environments_production.py   # production config
├── logs/                  # log files
├── reports/               # test reports
├── tests/                 # test cases
│   ├── __init__.py
│   ├── conftest.py        # pytest fixtures and hooks
│   ├── test_project_a/    # tests for Project A
│   │   ├── __init__.py
│   │   └── test_api.py
│   ├── test_project_b/    # tests for Project B
│   │   ├── __init__.py
│   │   └── test_api.py
│   └── ...
├── utils/                 # utility modules
│   ├── __init__.py
│   ├── http_client.py     # wrapped HTTP client
│   ├── data_generator.py  # data generators
│   └── helpers.py         # helper functions
├── requirements.txt       # dependency list
└── run_tests.sh           # script to execute tests
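The tests/conftest.py in the tree above is where shared pytest fixtures and hooks live. A minimal sketch of what it might contain (the fixture names and log-file location are illustrative assumptions, not part of the original framework):

```python
# tests/conftest.py -- illustrative sketch of shared fixtures
import logging
import uuid

import pytest

@pytest.fixture(scope="session", autouse=True)
def configure_logging(tmp_path_factory):
    # Send framework logs to a per-session file under pytest's tmp dir
    log_file = tmp_path_factory.mktemp("logs") / "test_run.log"
    logging.basicConfig(
        filename=log_file,
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

@pytest.fixture(scope="session")
def run_id():
    # Unique id that tests can attach to generated data to avoid collisions
    return uuid.uuid4().hex[:8]
```

Because conftest.py sits at the tests/ root, both test_project_a and test_project_b pick these fixtures up automatically.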

2. Configuration Management (config/settings.py)

# config/settings.py
import os
from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
# Default configuration
DEFAULT_LOG_LEVEL = 'INFO'
DEFAULT_TIMEOUT = 30  # seconds
# Select and import environment-specific settings
# (one module per environment, e.g. config/environments_development.py)
ENVIRONMENT = os.getenv('ENV', 'development')
if ENVIRONMENT == 'production':
    from .environments_production import *
elif ENVIRONMENT == 'testing':
    from .environments_testing import *
else:
    from .environments_development import *
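settings.py expects one module per environment alongside it. A hypothetical environments_development.py might look like this (the variable names and values are illustrative assumptions):

```python
# config/environments_development.py -- illustrative example
BASE_URL = "https://dev-api.example.com"   # placeholder dev host
TIMEOUT = 10          # shorter timeout for local development
LOG_LEVEL = "DEBUG"   # verbose logging while developing
VERIFY_SSL = False    # tolerate self-signed certs on dev hosts
```

Because settings.py does `from .environments_development import *`, the rest of the framework simply imports `settings` and reads `settings.BASE_URL` without caring which environment is active.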

3. HTTP Client Wrapper (utils/http_client.py)

# utils/http_client.py
import requests
import logging
from requests.exceptions import RequestException

class HttpClient:
    def __init__(self, base_url, headers=None):
        self.base_url = base_url.rstrip('/')
        self.headers = headers or {}
        self.session = requests.Session()
        self.session.headers.update(self.headers)

    def get(self, endpoint, params=None, **kwargs):
        return self._request('GET', endpoint, params=params, **kwargs)

    def post(self, endpoint, data=None, json=None, **kwargs):
        return self._request('POST', endpoint, data=data, json=json, **kwargs)

    def _request(self, method, endpoint, **kwargs):
        url = f"{self.base_url}/{endpoint.lstrip('/')}"
        try:
            response = self.session.request(method, url, **kwargs)
            response.raise_for_status()
            logging.info(f"{method} {url} - Response Status: {response.status_code}")
            return response
        except RequestException as e:
            logging.error(f"Error during {method} request to {url}: {e}")
            raise  # re-raise so failing calls surface as test errors, not None

    def close(self):
        self.session.close()
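The reason `__init__` puts headers on the Session rather than on each call is that requests merges per-request headers on top of session-level ones, so auth set once applies everywhere while individual calls can still add fields. A small standalone demonstration (the URL and token are placeholders; `prepare_request` performs the merge without any network I/O):

```python
import requests

session = requests.Session()
session.headers.update({"Authorization": "Bearer TOKEN"})  # set once

req = requests.Request(
    "GET",
    "https://api.example.com/ping",
    headers={"X-Trace-Id": "abc123"},  # per-request extra header
)
prepared = session.prepare_request(req)  # merge happens here; nothing is sent

# Both the session-wide and the per-request header survive the merge
assert prepared.headers["Authorization"] == "Bearer TOKEN"
assert prepared.headers["X-Trace-Id"] == "abc123"
```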

4. Test Cases (tests/test_project_a/test_api.py)

# tests/test_project_a/test_api.py
import pytest
from utils.http_client import HttpClient

@pytest.fixture(scope='module')
def client():
    base_url = "https://api.project-a.com"
    headers = {'Authorization': 'Bearer YOUR_ACCESS_TOKEN'}
    client = HttpClient(base_url, headers)
    yield client
    client.close()  # release the underlying session after the module's tests

def test_get_user_info(client):
    response = client.get('/user/123')
    assert response.status_code == 200
    user_data = response.json()
    assert 'id' in user_data and user_data['id'] == 123

@pytest.mark.parametrize("payload", [
    {"name": "Alice", "email": "[email protected]"},
    {"name": "Bob", "email": "[email protected]"}
])
def test_create_user(client, payload):
    response = client.post('/users', json=payload)
    assert response.status_code == 201
    created_user = response.json()
    assert created_user['name'] == payload['name']
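The parametrized payloads above are static; utils/data_generator.py (listed in the directory tree) can produce them dynamically so repeated runs do not collide on duplicate users. A minimal sketch using only the standard library (the function names and payload fields are assumptions, chosen to match the /users payloads above):

```python
# utils/data_generator.py -- hypothetical sketch
import random
import string

def random_string(length=8):
    """Return a random lowercase ASCII string."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

def user_payload():
    """Build a unique user payload for POST /users tests."""
    name = random_string()
    return {"name": name, "email": f"{name}@example.com"}
```

A test could then call `client.post('/users', json=user_payload())` instead of hard-coding Alice and Bob.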

5. Running Tests (run_tests.sh)

#!/bin/bash
set -e  # stop on the first failing command
# Optional environment variable (selects the config in settings.py)
export ENV=testing
# Install dependencies
pip install -r requirements.txt
# Make sure output directories exist
mkdir -p logs reports
# Execute tests and generate an HTML report
pytest --html=reports/report.html --self-contained-html

Diagram: The following ASCII diagram illustrates the relationships among the framework components, showing how configuration, logs, reports, tests, utilities, and execution scripts are organized under the top‑level api_automation_framework directory.

+---------------------------------------+
| api_automation_framework/             |
|                                       |
|  +---------------------------------+  |
|  | config/                         |  |
|  |   settings.py                   |  |
|  |   environments_*.py             |  |
|  +---------------------------------+  |
|  +---------------------------------+  |
|  | logs/                           |  |
|  +---------------------------------+  |
|  +---------------------------------+  |
|  | reports/                        |  |
|  +---------------------------------+  |
|  +---------------------------------+  |
|  | tests/                          |  |
|  |   conftest.py                   |  |
|  |   test_project_a/test_api.py    |  |
|  |   test_project_b/test_api.py    |  |
|  +---------------------------------+  |
|  +---------------------------------+  |
|  | utils/                          |  |
|  |   http_client.py                |  |
|  |   data_generator.py             |  |
|  |   helpers.py                    |  |
|  +---------------------------------+  |
|                                       |
|  requirements.txt                     |
|  run_tests.sh                         |
+---------------------------------------+

Best‑Practice Summary

Modularization: each piece of functionality is encapsulated in its own module, improving reuse and maintainability.

Configuration Management: separate configuration files and environment variables to handle different deployment contexts.

Test Organization: group test cases by project or feature to keep a clean codebase.

Logging: use the logging module to capture detailed runtime information for debugging.

Report Generation: integrate pytest‑html to automatically produce comprehensive HTML reports.

Continuous Improvement: regularly review framework performance and evolve it according to project needs and technological advances.
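The Logging recommendation above can be wired up in a few lines; this sketch writes to the logs/ directory from the tree (the file name and format string are illustrative choices):

```python
import logging
from pathlib import Path

LOG_DIR = Path("logs")
LOG_DIR.mkdir(exist_ok=True)  # matches the logs/ directory in the tree

# Attach a file handler to a named logger rather than the root logger,
# so the framework's output stays separate from third-party libraries'.
logger = logging.getLogger("framework")
handler = logging.FileHandler(LOG_DIR / "framework.log")
handler.setFormatter(
    logging.Formatter("%(asctime)s [%(levelname)s] %(name)s: %(message)s")
)
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("logging configured")
```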

Tags: backend, Python, automation, framework, API Testing, pytest