
Five Common Traps That Undermine Effective Unit Tests and How to Fix Them

This article explains five typical pitfalls that make unit tests ineffective—testing per function instead of behavior, chasing code coverage, over-reliance on mocks, writing tests that never fail, and allowing nondeterminism—while offering practical guidance on how to avoid each issue.

FunTester

The purpose of unit testing is to ensure that a system continues to work as expected over time, guaranteeing quality and freeing developers to focus on improving their skills and their quality of life.

While many tests help catch errors early, provide documentation, and support regression testing, some unit tests fail to deliver these benefits because they are overly complex, flaky, or never fail.

This article introduces five traps that render unit tests ineffective and shows how to fix them.

Write One Unit Test per Functionality, Not per Code Unit

It may seem simple to write a test for a small function such as calculate_average, but each test should verify a single behavior, e.g., test_calculate_average_returns_0_for_empty_list. This encourages thinking about edge cases and results in richer documentation.

Write a unit test for each functional unit, not each code unit.

Focus on external behavior; over‑emphasizing internal implementation makes tests brittle when refactoring occurs. Only write exhaustive tests for truly complex internal logic.
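As a sketch, behavior-per-test naming for a hypothetical calculate_average might look like this (the function body and test names here are illustrative, not taken from a real codebase):

```python
def calculate_average(elements):
    """Return the arithmetic mean of a list, or 0 for an empty list."""
    if not elements:
        return 0
    return sum(elements) / len(elements)

# One test per behavior, each named after the behavior it verifies
def test_calculate_average_returns_0_for_empty_list():
    assert calculate_average([]) == 0

def test_calculate_average_returns_mean_of_nonempty_list():
    assert calculate_average([2, 4, 6]) == 4

def test_calculate_average_handles_negative_numbers():
    assert calculate_average([-1, 1]) == 0
```

Each test name documents one externally observable behavior, so a failing test immediately tells you which behavior broke.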

Avoid Writing Tests Solely for Code Coverage

Tracking coverage is useful, but 100% coverage does not guarantee edge-case coverage. For example, a function that averages a list may be fully covered yet still crash on the empty-list case.

from typing import List

def average(elements: List[int]) -> float:
    return sum(elements) / len(elements)

def test_average_returns_average_of_list():
    result = average([1, 3, 5, 7])
    assert result == 4
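The test above executes every line of average, so line coverage is 100%, yet the empty-list state is never exercised. A stdlib-only sketch of the test that coverage alone would never prompt you to write:

```python
def average(elements):
    return sum(elements) / len(elements)

def test_average_fails_for_empty_list():
    # 100% line coverage of average() never exercised this state:
    # sum([]) / len([]) divides by zero.
    try:
        average([])
    except ZeroDivisionError:
        return  # the uncovered edge case surfaces here
    raise AssertionError("expected ZeroDivisionError for empty list")
```

Whether the right fix is to raise, return 0, or return None is a design decision; the point is that coverage metrics said nothing about this state.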

Coverage measures executed lines, not state coverage, and can lead to excessive, low‑value tests, especially for glue code. Instead, follow Martin Fowler’s advice to focus testing on risky code.

You should concentrate your testing effort on risk points. — Martin Fowler, Refactoring

Minimize Over‑Reliance on Mocks

Mocks are essential when the code under test interacts with other modules, but writing dozens of mock lines for a single function indicates the function is too complex and should be refactored.

For example, rather than hand-mocking the request, response, and call_next objects one by one, a middleware can be exercised end to end through a real test client:

# custom_middleware.py
from starlette.middleware.base import BaseHTTPMiddleware

class CustomHeaderMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        response = await call_next(request)
        response.headers["CustomField"] = "bla"
        return response

# test_custom_middleware.py
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.responses import PlainTextResponse
from starlette.routing import Route
from starlette.testclient import TestClient

from custom_middleware import CustomHeaderMiddleware

async def endpoint_for_test(_):
    return PlainTextResponse("Test")

middleware = [Middleware(CustomHeaderMiddleware)]
routes = [Route("/test", endpoint=endpoint_for_test)]
app = Starlette(routes=routes, middleware=middleware)

def test_middleware_sets_field():
    # TestClient runs the app's async stack synchronously,
    # so no mocks and no asyncio plumbing are needed.
    client = TestClient(app)
    response = client.get("/test")
    assert response.headers["CustomField"] == "bla"

Instead of adding more mocks, consider simplifying or refactoring the code to make it easier to test.

Avoid Writing Tests That Never Fail

Tests that cannot fail give a false sense of security. For instance, a test that only checks a mocked response will never detect changes in the production code.

import json

import requests
import requests_mock

# URL and QUERY are module-level constants defined elsewhere

def get_film(id: str):
    data = {"query": QUERY, "variables": json.dumps({"id": id})}
    response = requests.post(URL, data=data)
    return response.json()["data"]["film"]

def test_get_film_returns_successfully():
    mock_response = {"data": {"film": {"title": "a New Test", "id": "testId", "episodeID": 4}}}
    with requests_mock.Mocker() as mock:
        mock.post(URL, json=mock_response)
        result = get_film("foo")
        assert result == {"title": "a New Test", "id": "testId", "episodeID": 4}

Ask yourself which changes would cause this test to fail; if only the mock changes, the test is not useful. Prefer writing failing tests first (test‑driven development).
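One way to give such a test something real to fail on is to pull the response handling into a pure function and test that directly. The parse_film helper below is a hypothetical refactor, not part of the original code:

```python
# Hypothetical refactor: the parsing and validation logic lives in a
# pure function, so the test exercises real production code rather
# than only echoing a mock.
def parse_film(payload: dict) -> dict:
    film = payload["data"]["film"]
    if "title" not in film or "id" not in film:
        raise ValueError("incomplete film payload")
    return film

def test_parse_film_returns_film_section():
    payload = {"data": {"film": {"title": "a New Test", "id": "testId"}}}
    assert parse_film(payload) == {"title": "a New Test", "id": "testId"}

def test_parse_film_rejects_incomplete_payload():
    # Unlike the mock-only test, this fails if the validation changes.
    try:
        parse_film({"data": {"film": {"title": "a New Test"}}})
    except ValueError:
        return
    raise AssertionError("expected ValueError")
```

Now a change to the validation rules breaks a test, which is exactly what a useful test should do.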

Eliminate Nondeterminism from Tests

Tests that depend on the current time or random data are flaky. Use tools like freezegun to control time, and avoid random data generators in unit tests; hard-coded inputs are more reliable.
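A stdlib-only alternative is to inject the clock as a parameter, which freezegun achieves for you by patching datetime. The is_expired helper below is illustrative:

```python
from datetime import datetime, timedelta

def is_expired(deadline: datetime, now: datetime = None) -> bool:
    # Accept the current time as a parameter so tests can pin it;
    # production callers simply omit it.
    now = now or datetime.now()
    return now > deadline

def test_is_expired_is_deterministic():
    fixed_now = datetime(2024, 1, 1, 12, 0, 0)
    # Same inputs, same result, on any machine at any time of day.
    assert is_expired(fixed_now - timedelta(minutes=1), now=fixed_now) is True
    assert is_expired(fixed_now + timedelta(minutes=1), now=fixed_now) is False
```

Whichever technique you choose, the test's inputs (including time) must be fixed, so a failure always means the code changed, not the wall clock.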

Summary

These five traps can prevent you from writing effective unit tests. To avoid them:

Write tests for each functional aspect, not each function.

Don’t obsess over coverage; focus on risky code.

Minimize mock code.

Ensure your tests can fail.

Keep nondeterminism out of tests.

Doing so will make your system more stable and give you confidence to change and deploy quickly.

Have Fun ~ Tester!
