Custom C++ Memory Pool for Fast Allocation from Heap

Today I will present a custom memory pool design that can cut allocation time significantly, since performance is often the greatest concern in C++ programs. The idea is to pre-allocate a large block and hand out fixed-size pieces to consumers later. This comes in especially handy when your objects are all the same size. Stay tuned for the details 😎

Memory Manager Design

The idea is simple. First, allocate enough space to accommodate N objects. Second, hand blocks out to consumers in order. Last, push deallocated blocks onto a stack so that empty blocks can be retrieved quickly the next time.

MemManager is a template class, so you can tailor it to your own class.

In the constructor, a fixed-size block is reserved from heap memory. It is freed later in the destructor.

Note that m_maxAllocatedSlot represents the number of slots that have ever been reserved. It starts at 0 and only increases; it never goes back. Check line 36 for the reservation of a block for first-time use. When a block is returned, as on line 49, it is pushed on top of the stack. When the next allocation is requested, line 33 hands back the most recently deallocated block.

Don’t worry, those details will be clearer later.
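To make the design concrete, here is a minimal sketch of what such a pool manager might look like. The class and member names (MemManager, m_maxAllocatedSlot) follow the article, but the implementation details are my assumption, not the original listing:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>
#include <stack>

// Minimal sketch of the pool manager described above; details are assumed.
template <typename T>
class MemManager {
public:
    explicit MemManager(std::size_t capacity)
        : m_capacity(capacity),
          m_maxAllocatedSlot(0),
          m_pool(static_cast<char*>(std::malloc(capacity * sizeof(T)))) {
        if (m_pool == nullptr) throw std::bad_alloc();  // one big up-front reservation
    }
    ~MemManager() { std::free(m_pool); }  // the whole pool is released at once

    MemManager(const MemManager&) = delete;
    MemManager& operator=(const MemManager&) = delete;

    void* allocate() {
        if (!m_freeSlots.empty()) {            // fast path: reuse the most recently freed slot
            void* p = m_freeSlots.top();
            m_freeSlots.pop();
            return p;
        }
        if (m_maxAllocatedSlot == m_capacity)  // pool exhausted
            throw std::bad_alloc();
        // Hand out the next never-used slot; m_maxAllocatedSlot only grows.
        return m_pool + (m_maxAllocatedSlot++) * sizeof(T);
    }

    void deallocate(void* p) {
        m_freeSlots.push(p);                   // returned blocks go on a stack
    }

private:
    std::size_t m_capacity;          // number of T-sized slots in the pool
    std::size_t m_maxAllocatedSlot;  // slots ever reserved; never decreases
    char* m_pool;                    // pre-allocated backing storage
    std::stack<void*> m_freeSlots;   // LIFO of deallocated blocks
};
```

Both allocate() and deallocate() are O(1): either a pointer bump into fresh territory or a single stack push/pop.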

Test Drive

Let’s create a class that has no use other than keeping an integer.

Take a look at it. There are two special methods, operator new(...) and operator delete(...), for ease of use with our custom allocator class. They both use the provided allocator to find a free slot.
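The original listing is not reproduced here, but a class wired up this way might look roughly like the sketch below. The Number name and the placement-style operator signatures are my assumptions (a compact copy of the pool is included so the snippet stands alone; the capacity check is omitted for brevity):

```cpp
#include <cstddef>
#include <cstdlib>
#include <stack>

// Compact pool with the same shape as the article's MemManager (details assumed).
template <typename T>
class MemManager {
public:
    explicit MemManager(std::size_t n)
        : m_pool(static_cast<char*>(std::malloc(n * sizeof(T)))), m_maxAllocatedSlot(0) {}
    ~MemManager() { std::free(m_pool); }
    void* allocate() {
        if (!m_free.empty()) { void* p = m_free.top(); m_free.pop(); return p; }
        return m_pool + (m_maxAllocatedSlot++) * sizeof(T);  // capacity check omitted
    }
    void deallocate(void* p) { m_free.push(p); }
private:
    char* m_pool;
    std::size_t m_maxAllocatedSlot;
    std::stack<void*> m_free;
};

// A class that does nothing but keep an integer, routing its storage
// through the pool instead of the global heap.
class Number {
public:
    explicit Number(int v) : m_value(v) {}
    int value() const { return m_value; }

    // Placement-style operator new: the caller supplies the allocator.
    static void* operator new(std::size_t, MemManager<Number>& mgr) {
        return mgr.allocate();
    }
    // Matching operator delete (also invoked automatically if the ctor throws).
    static void operator delete(void* p, MemManager<Number>& mgr) {
        mgr.deallocate(p);
    }

private:
    int m_value;
};
```

With this shape, construction reads `Number* n = new (mgr) Number(42);`. Destruction is a two-step affair, `n->~Number();` followed by `Number::operator delete(n, mgr);`, because a plain delete expression never calls a placement operator delete.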

Now the real test takes place below.

To visualize the behavior, I first allocated 4 elements, freed the first 2, then reallocated them. In the end I deallocated everything. Please check the output created by this function below.
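The original driver function is missing here, but the sequence above can be sketched as follows. The slotIndex() helper is something I added purely to make the printed output readable, and the pool is again a compact assumed copy:

```cpp
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <stack>

// Pool sketch as in the article (implementation assumed); slotIndex() is a
// demo-only helper that maps a pointer back to its slot number.
template <typename T>
class MemManager {
public:
    explicit MemManager(std::size_t n)
        : m_pool(static_cast<char*>(std::malloc(n * sizeof(T)))), m_maxAllocatedSlot(0) {}
    ~MemManager() { std::free(m_pool); }
    void* allocate() {
        if (!m_free.empty()) { void* p = m_free.top(); m_free.pop(); return p; }
        return m_pool + (m_maxAllocatedSlot++) * sizeof(T);  // capacity check omitted
    }
    void deallocate(void* p) { m_free.push(p); }
    std::size_t slotIndex(const void* p) const {
        return static_cast<std::size_t>(static_cast<const char*>(p) - m_pool) / sizeof(T);
    }
private:
    char* m_pool;
    std::size_t m_maxAllocatedSlot;
    std::stack<void*> m_free;
};

// Allocate 4 slots, free the first 2, then reallocate them.
void testDrive() {
    MemManager<int> mm(4);
    void* p[4];
    for (int i = 0; i < 4; ++i) {
        p[i] = mm.allocate();
        std::printf("allocated slot %zu\n", mm.slotIndex(p[i]));  // prints 0, 1, 2, 3
    }
    mm.deallocate(p[0]);
    mm.deallocate(p[1]);
    // LIFO: the last slot freed (1) comes back first, then slot 0.
    std::printf("re-allocated slot %zu\n", mm.slotIndex(mm.allocate()));  // prints 1
    std::printf("re-allocated slot %zu\n", mm.slotIndex(mm.allocate()));  // prints 0
}
```

The stack discipline is visible in the last two lines: blocks come back in the reverse order of their deallocation.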


For a performance comparison between the custom pool and the system-managed heap, I allocated 10M elements and then released them. Check the table below.

Run      Dynamic Allocation Time (s)   Custom Allocation Time (s)
1        7.869                         5.182
2        7.943                         4.997
3        7.813                         5.028
Average  ~7.875                        ~5.069
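The harness behind these numbers is not shown; a minimal sketch of such a measurement might look like the following. The benchmark() function, the Payload struct, and the pool internals are all my assumptions, so treat this as the shape of the experiment rather than the exact code that produced the table:

```cpp
#include <chrono>
#include <cstddef>
#include <cstdlib>
#include <stack>
#include <utility>
#include <vector>

// Pool sketch as before (implementation assumed; capacity check omitted).
template <typename T>
class MemManager {
public:
    explicit MemManager(std::size_t n)
        : m_pool(static_cast<char*>(std::malloc(n * sizeof(T)))), m_maxAllocatedSlot(0) {}
    ~MemManager() { std::free(m_pool); }
    void* allocate() {
        if (!m_free.empty()) { void* p = m_free.top(); m_free.pop(); return p; }
        return m_pool + (m_maxAllocatedSlot++) * sizeof(T);
    }
    void deallocate(void* p) { m_free.push(p); }
private:
    char* m_pool;
    std::size_t m_maxAllocatedSlot;
    std::stack<void*> m_free;
};

struct Payload { int v; };  // stand-in for the integer-holding test class

// Allocates and frees n blocks through the system heap, then through the
// pool, returning the elapsed seconds for each (system first, pool second).
std::pair<double, double> benchmark(std::size_t n) {
    std::vector<void*> ptrs(n);

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i) ptrs[i] = ::operator new(sizeof(Payload));
    for (std::size_t i = 0; i < n; ++i) ::operator delete(ptrs[i]);
    auto t1 = std::chrono::steady_clock::now();

    MemManager<Payload> pool(n);
    for (std::size_t i = 0; i < n; ++i) ptrs[i] = pool.allocate();
    for (std::size_t i = 0; i < n; ++i) pool.deallocate(ptrs[i]);
    auto t2 = std::chrono::steady_clock::now();

    std::chrono::duration<double> sys = t1 - t0, custom = t2 - t1;
    return {sys.count(), custom.count()};
}
```

For the table above, n would be 10,000,000; absolute numbers will of course differ by machine, compiler, and allocator.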

On average, about 36% of the allocation time is saved in this simple use case. The savings will vary with the usage pattern, especially when the heap is heavily fragmented. Use your RAM wisely 😉