
Understanding C++ Memory Pools: Concepts, Design, and Implementation

This article explains the problems of memory fragmentation and allocation inefficiency in C++ programs, introduces the concept of memory pools, discusses their design principles, advantages, and common implementations, and provides a complete example of a thread‑safe fixed‑size memory pool with code.


1. Introduction to C++ Memory Management

In C++ programs, frequent use of new/delete or malloc/free can cause memory fragmentation and performance degradation: the process's memory footprint keeps growing even as the program slows down.

Fragmentation occurs because allocated blocks may be larger than needed (internal fragmentation) or because many small free blocks are scattered (external fragmentation), making it hard to satisfy larger allocation requests.

2. Why Use a Memory Pool?

A memory pool (fixed‑size‑block allocation) pre‑allocates a large chunk of memory and manages smaller blocks internally, reducing fragmentation, speeding up allocation/deallocation, and allowing leak detection.

Much less internal and external fragmentation.

Allocation and release are faster than system malloc/free.

Can verify whether a pointer belongs to the pool.

Detects memory leaks by asserting on unreleased blocks.

3. Memory‑Pool Fundamentals

3.1 Basic Idea

Before any allocation, the pool reserves a contiguous region of memory and splits it into equal‑sized blocks. A free‑list (linked list) tracks unused blocks; allocation removes a block from the list, and deallocation returns it to the head of the list.

3.2 Core Algorithms

Pre‑allocate a chunk and divide it into blocks.

Maintain a head pointer to the first free block.

On allocation, pop the head; on free, push the block back.

If the pool is exhausted, allocate a new chunk and link it to the existing free‑list.

3.3 Advantages

Reduces fragmentation because all blocks have the same size.

Eliminates most system calls, improving speed.

Can reclaim every block at once when the pool is destroyed, even blocks the program forgot to free.

4. Design Considerations for a C++ Memory Pool

Should the pool grow automatically?

Is total memory usage allowed to increase only?

Are block sizes fixed or variable?

Is the pool thread‑safe?

Should memory be cleared on deallocation?

Does it need to be compatible with std::allocator?

5. Common Implementations

Simple free‑list allocator (linked list of free blocks).

Fixed‑size allocators such as SGI __pool_alloc, Boost object_pool, and ACE Free_List.

Variable‑size allocators like dlmalloc, TCMalloc, and the GNU STL allocators.

6. Example: A Thread‑Safe Fixed‑Size Memory Pool

#include <cassert>
#include <cstdlib>
#include <mutex>
#include <vector>

// A free block: the link to the next free block is stored inside
// the block itself, so blockSize must be at least sizeof(MemoryBlock).
struct MemoryBlock {
    MemoryBlock* next; // link to next free block
};

class MemoryPool {
public:
    MemoryPool(size_t blockSize, size_t initialBlocks)
        : blockSize(blockSize) {
        assert(blockSize >= sizeof(MemoryBlock));
        addChunk(initialBlocks);
    }
    ~MemoryPool() {
        // Free whole chunks, not individual blocks: the blocks were
        // carved out of a small number of large malloc'd regions.
        for (void* chunk : chunks) free(chunk);
    }
    void* allocate() {
        std::lock_guard<std::mutex> lock(mtx);
        if (!freeList) addChunk(10); // pool exhausted: grow by 10 blocks
        MemoryBlock* blk = freeList; // pop the head of the free list
        freeList = freeList->next;
        return blk;
    }
    void deallocate(void* ptr) {
        std::lock_guard<std::mutex> lock(mtx);
        MemoryBlock* blk = static_cast<MemoryBlock*>(ptr);
        blk->next = freeList; // push the block back onto the free list
        freeList = blk;
    }
private:
    // Allocate one contiguous chunk, record it for the destructor,
    // and thread its blocks onto the free list.
    void addChunk(size_t numBlocks) {
        char* raw = static_cast<char*>(malloc(blockSize * numBlocks));
        assert(raw != nullptr);
        chunks.push_back(raw);
        for (size_t i = 0; i < numBlocks; ++i) {
            MemoryBlock* blk = reinterpret_cast<MemoryBlock*>(raw + i * blockSize);
            blk->next = freeList;
            freeList = blk;
        }
    }
    size_t blockSize;
    std::vector<void*> chunks;       // every chunk malloc'd by the pool
    MemoryBlock* freeList = nullptr; // head of the free list
    std::mutex mtx;                  // guards allocate/deallocate
};

int main() {
    MemoryPool pool(16, 5); // 16-byte blocks, 5 initially
    void* p1 = pool.allocate();
    void* p2 = pool.allocate();
    pool.deallocate(p1);
    pool.deallocate(p2);
    return 0;
}

The example demonstrates construction, allocation, automatic expansion, and deallocation of a fixed-size pool: a singly linked free list tracks unused blocks, every malloc'd chunk is recorded so the destructor can release it in one pass, and a std::mutex guards allocate and deallocate for thread safety.

Tags: performance, memory management, C++, memory pool, allocator
Written by Deepin Linux

Research areas: Windows & Linux platforms, C/C++ backend development, embedded systems and Linux kernel, etc.
