I was wondering if it is practicable to have a C++ standard library compliant allocator that uses a (fixed-size) buffer that lives on the stack.
Starting in C++17 it's actually quite simple to do. Full credit goes to the author of the dumbest allocator, as that's what this is based on.
The dumbest allocator is a monotonic bump allocator which takes a char[] resource as its underlying storage. In the original version, that char[] is placed on the heap via mmap, but it's trivial to change it to point at a char[] on the stack.
#include <cstddef>

template <std::size_t Size = 256>
class bumping_memory_resource {
public:
    char buffer[Size];
    char* _ptr;

    explicit bumping_memory_resource()
        : _ptr(&buffer[0]) {}

    // Hand out the next `size` bytes and bump the pointer; no per-allocation bookkeeping.
    void* allocate(std::size_t size) noexcept {
        auto ret = _ptr;
        _ptr += size;
        return ret;
    }

    // Individual deallocations are a no-op; memory is reclaimed when the resource dies.
    void deallocate(void*) noexcept {}
};
This allocates Size bytes on the stack on creation, 256 by default.
template <typename T, typename Resource = bumping_memory_resource<256>>
class bumping_allocator {
    Resource* _res;

public:
    using value_type = T;

    explicit bumping_allocator(Resource& res)
        : _res(&res) {}

    bumping_allocator(const bumping_allocator&) = default;

    // Allow rebinding to a different value_type that shares the same resource.
    template <typename U>
    bumping_allocator(const bumping_allocator<U, Resource>& other)
        : bumping_allocator(other.resource()) {}

    Resource& resource() const { return *_res; }

    T* allocate(std::size_t n) { return static_cast<T*>(_res->allocate(sizeof(T) * n)); }
    void deallocate(T* ptr, std::size_t) { _res->deallocate(ptr); }

    friend bool operator==(const bumping_allocator& lhs, const bumping_allocator& rhs) {
        return lhs._res == rhs._res;
    }
    friend bool operator!=(const bumping_allocator& lhs, const bumping_allocator& rhs) {
        return lhs._res != rhs._res;
    }
};
And this is the actual allocator. Note that it would be trivial to add a reset to the resource manager, letting you create a new allocator starting at the beginning of the region again. You could also implement a ring buffer, with all the usual risks thereof.
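For illustration, here's a minimal usage sketch of my own (not part of the original answer), assuming the two classes above are in scope and <vector> is included; the reserve() call matters because the bump resource never reuses freed space:

#include <vector>

int main() {
    bumping_memory_resource<> res;              // 256-byte buffer, living on the stack of main()
    bumping_allocator<int> alloc{res};          // hands out slices of that buffer

    std::vector<int, bumping_allocator<int>> v{alloc};
    v.reserve(16);                              // grow once, up front: freed blocks are never reused
    for (int i = 0; i < 16; ++i) v.push_back(i);
}   // the whole buffer is reclaimed when res goes out of scope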
As for when you might want something like this: I use it in embedded systems. Embedded systems usually don't react well to heap fragmentation, so having the ability to use dynamic allocation that doesn't go on the heap is sometimes handy.
It's definitely possible to create a fully C++11/C++14 conforming stack allocator*. But you need to consider some of the ramifications of the implementation and the semantics of stack allocation, and how they interact with standard containers.
Here's a fully C++11/C++14 conforming stack allocator (also hosted on my github):
#include <cstddef>
#include <functional>
#include <memory>

template <class T, std::size_t N, class Allocator = std::allocator<T>>
class stack_allocator
{
    // Give other instantiations access to our private members
    // (needed by the converting constructor below).
    template <class U, std::size_t M, class A>
    friend class stack_allocator;

public:
    typedef typename std::allocator_traits<Allocator>::value_type value_type;
    typedef typename std::allocator_traits<Allocator>::pointer pointer;
    typedef typename std::allocator_traits<Allocator>::const_pointer const_pointer;
    typedef typename Allocator::reference reference;
    typedef typename Allocator::const_reference const_reference;
    typedef typename std::allocator_traits<Allocator>::size_type size_type;
    typedef typename std::allocator_traits<Allocator>::difference_type difference_type;
    typedef typename std::allocator_traits<Allocator>::const_void_pointer const_void_pointer;

    typedef Allocator allocator_type;

public:
    explicit stack_allocator(const allocator_type& alloc = allocator_type())
        : m_allocator(alloc), m_begin(nullptr), m_end(nullptr), m_stack_pointer(nullptr)
    { }

    explicit stack_allocator(pointer buffer, const allocator_type& alloc = allocator_type())
        : m_allocator(alloc), m_begin(buffer), m_end(buffer + N),
          m_stack_pointer(buffer)
    { }

    template <class U>
    stack_allocator(const stack_allocator<U, N, Allocator>& other)
        : m_allocator(other.m_allocator), m_begin(other.m_begin), m_end(other.m_end),
          m_stack_pointer(other.m_stack_pointer)
    { }

    constexpr static size_type capacity()
    {
        return N;
    }

    pointer allocate(size_type n, const_void_pointer hint = const_void_pointer())
    {
        if (n <= size_type(std::distance(m_stack_pointer, m_end)))
        {
            pointer result = m_stack_pointer;
            m_stack_pointer += n;
            return result;
        }

        return m_allocator.allocate(n, hint);
    }

    void deallocate(pointer p, size_type n)
    {
        if (pointer_to_internal_buffer(p))
        {
            m_stack_pointer -= n;
        }
        else m_allocator.deallocate(p, n);
    }

    size_type max_size() const noexcept
    {
        return m_allocator.max_size();
    }

    template <class U, class... Args>
    void construct(U* p, Args&&... args)
    {
        m_allocator.construct(p, std::forward<Args>(args)...);
    }

    template <class U>
    void destroy(U* p)
    {
        m_allocator.destroy(p);
    }

    pointer address(reference x) const noexcept
    {
        if (pointer_to_internal_buffer(std::addressof(x)))
        {
            return std::addressof(x);
        }

        return m_allocator.address(x);
    }

    const_pointer address(const_reference x) const noexcept
    {
        if (pointer_to_internal_buffer(std::addressof(x)))
        {
            return std::addressof(x);
        }

        return m_allocator.address(x);
    }

    template <class U>
    struct rebind { typedef stack_allocator<U, N, allocator_type> other; };

    pointer buffer() const noexcept
    {
        return m_begin;
    }

private:
    bool pointer_to_internal_buffer(const_pointer p) const
    {
        return (!(std::less<const_pointer>()(p, m_begin)) && (std::less<const_pointer>()(p, m_end)));
    }

    allocator_type m_allocator;
    pointer m_begin;
    pointer m_end;
    pointer m_stack_pointer;
};

template <class T1, std::size_t N, class Allocator, class T2>
bool operator == (const stack_allocator<T1, N, Allocator>& lhs,
                  const stack_allocator<T2, N, Allocator>& rhs) noexcept
{
    return lhs.buffer() == rhs.buffer();
}

template <class T1, std::size_t N, class Allocator, class T2>
bool operator != (const stack_allocator<T1, N, Allocator>& lhs,
                  const stack_allocator<T2, N, Allocator>& rhs) noexcept
{
    return !(lhs == rhs);
}
This allocator uses a user-provided fixed-size buffer as an initial source of memory, and then falls back on a secondary allocator (std::allocator<T> by default) when it runs out of space.
Things to consider:
Before you just go ahead and use a stack allocator, you need to consider your allocation patterns. Firstly, when using a memory buffer on the stack, you need to consider what exactly it means to allocate and deallocate memory.

The simplest method (and the method employed above) is to simply increment a stack pointer for allocations, and decrement it for deallocations. Note that this severely limits how you can use the allocator in practice. It will work fine for, say, an std::vector (which will allocate a single contiguous memory block) if used correctly, but will not work for, say, an std::map, which will allocate and deallocate node objects in varying order.

If your stack allocator simply increments and decrements a stack pointer, then you'll get undefined behavior if your allocations and deallocations are not in LIFO order. Even an std::vector will cause undefined behavior if it first allocates a single contiguous block from the stack, then allocates a second stack block, then deallocates the first block, which will happen every time the vector increases its capacity to a value that is still smaller than stack_size. This is why you'll need to reserve the stack size in advance. (But see the note below regarding Howard Hinnant's implementation.)
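To make the hazard concrete, here is a small sketch of my own (using the stack_allocator defined above, and assuming <vector> is included) of the growth pattern that breaks the LIFO assumption when reserve() is skipped:

int buffer[4];
std::vector<int, stack_allocator<int, 4>> v{stack_allocator<int, 4>(buffer)};

// Without reserve(), a typical growth sequence looks like this:
//   allocate(1)               -> returns buffer + 0, stack pointer now buffer + 1
//   allocate(2)               -> returns buffer + 1, stack pointer now buffer + 3
//   deallocate(buffer + 0, 1) -> stack pointer rewinds to buffer + 2,
//                                which points inside the block the vector still owns
// The next in-buffer allocation can therefore hand out storage that overlaps
// live elements: undefined behavior.
//
// Reserving the full capacity up front keeps every allocation/deallocation LIFO:
v.reserve(4);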
Which brings us to the question ...
What do you really want from a stack allocator?
Do you actually want a general purpose allocator that will allow you to allocate and deallocate memory chunks of various sizes in varying order (like malloc), except it draws from a pre-allocated stack buffer instead of calling sbrk? If so, you're basically talking about implementing a general purpose allocator that maintains a free list of memory blocks somehow, only the user can provide it with a pre-existing stack buffer. This is a much more complex project. (And what should it do if it runs out of space? Throw std::bad_alloc? Fall back on the heap?)

The above implementation assumes you want an allocator that will simply use LIFO allocation patterns and fall back on another allocator if it runs out of space. This works fine for std::vector, which will always use a single contiguous buffer that can be reserved in advance. When std::vector needs a larger buffer, it will allocate a larger buffer, copy (or move) the elements from the smaller buffer, and then deallocate the smaller buffer. When the vector requests a larger buffer, the above stack_allocator implementation will simply fall back to a secondary allocator (which is std::allocator by default).
So, for example:
const static std::size_t stack_size = 4;
int buffer[stack_size];
typedef stack_allocator<int, stack_size> allocator_type;
std::vector<int, allocator_type> vec((allocator_type(buffer))); // double parenthesis here for "most vexing parse" nonsense
vec.reserve(stack_size); // attempt to reserve space for 4 elements
std::cout << vec.capacity() << std::endl;
vec.push_back(10);
vec.push_back(20);
vec.push_back(30);
vec.push_back(40);
// Assert that the vector is actually using our stack
//
assert(
    std::equal(
        vec.begin(),
        vec.end(),
        buffer,
        [](const int& v1, const int& v2) {
            return &v1 == &v2;
        }
    )
);
// Output some values in the stack, we see it is the same values we
// inserted in our vector.
//
std::cout << buffer[0] << std::endl;
std::cout << buffer[1] << std::endl;
std::cout << buffer[2] << std::endl;
std::cout << buffer[3] << std::endl;
// Attempt to push back some more values. Since our stack allocator only has
// room for 4 elements, we cannot satisfy the request for an 8 element buffer.
// So, the allocator quietly falls back on using std::allocator.
//
// Alternatively, you could modify the stack_allocator implementation
// to throw std::bad_alloc
//
vec.push_back(50);
vec.push_back(60);
vec.push_back(70);
vec.push_back(80);
// Assert that we are no longer using the stack buffer
//
assert(
    !std::equal(
        vec.begin(),
        vec.end(),
        buffer,
        [](const int& v1, const int& v2) {
            return &v1 == &v2;
        }
    )
);
// Print out all the values in our vector just to make sure
// everything is sane.
//
for (auto v : vec) std::cout << v << ", ";
std::cout << std::endl;
See: http://ideone.com/YhMZxt
Again, this works fine for vector - but you need to ask yourself what exactly you intend to do with the stack allocator. If you want a general purpose memory allocator that just happens to draw from a stack buffer, you're talking about a much more complex project. A simple stack allocator, however, which merely increments and decrements a stack pointer, will work for a limited set of use cases. Note that for non-POD types, you'll need to use std::aligned_storage<sizeof(T), alignof(T)> to create the actual stack buffer.
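As a rough sketch of my own of what that buffer declaration might look like (the Widget type and example() function here are purely hypothetical):

#include <cstddef>
#include <string>
#include <type_traits>
#include <vector>

struct Widget { std::string name; double value; };   // hypothetical non-POD element type

void example()
{
    constexpr std::size_t stack_size = 8;

    // Raw, suitably aligned storage for stack_size Widgets; no Widget is constructed yet.
    std::aligned_storage<sizeof(Widget), alignof(Widget)>::type storage[stack_size];

    using widget_allocator = stack_allocator<Widget, stack_size>;
    std::vector<Widget, widget_allocator> widgets{
        widget_allocator(reinterpret_cast<Widget*>(storage))};
    widgets.reserve(stack_size);          // keep every element inside the aligned buffer

    widgets.push_back(Widget{"answer", 42.0});
}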
I'd also note that unlike Howard Hinnant's implementation, the above implementation doesn't explicitly check that when you call deallocate(), the pointer passed in is the last block allocated. Hinnant's implementation will simply do nothing if the pointer passed in isn't a LIFO-ordered deallocation. This will enable you to use an std::vector without reserving in advance, because the allocator will basically ignore the vector's attempt to deallocate the initial buffer. But this also blurs the semantics of the allocator a bit, and relies on behavior that is pretty specifically bound to the way std::vector is known to work. My feeling is that we may as well simply say that passing any pointer to deallocate() which wasn't returned via the last call to allocate() will result in undefined behavior and leave it at that.
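If you prefer Hinnant's more forgiving semantics, deallocate() could be adapted along these lines (a sketch of my own, replacing the deallocate() member shown above):

void deallocate(pointer p, size_type n)
{
    if (pointer_to_internal_buffer(p))
    {
        // Only rewind if this is the most recently allocated block;
        // otherwise silently ignore the request, as Hinnant's allocator does.
        if (p + n == m_stack_pointer)
            m_stack_pointer = p;
    }
    else m_allocator.deallocate(p, n);
}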
*Finally - the following caveat: it seems to be debatable whether or not the function that checks whether a pointer is within the boundaries of the stack buffer is even defined behavior by the standard. Order-comparing two pointers from different new/malloc'd buffers is arguably implementation-defined behavior (even with std::less), which perhaps makes it impossible to write a standards-conforming stack allocator implementation that falls back on heap allocation. (But in practice this won't matter unless you're running an 80286 on MS-DOS.)
** Finally (really now), it's also worth noting that the word "stack" in stack allocator is sort of overloaded to refer both to the source of memory (a fixed-size stack array) and the method of allocation (a LIFO increment/decrement stack pointer). When most programmers say they want a stack allocator, they're thinking about the former meaning without necessarily considering the semantics of the latter, and how these semantics restrict the use of such an allocator with standard containers.
This is actually an extremely useful practice and is used quite a bit in performance-critical development, such as games. Embedding memory inline on the stack, or within the allocation of a class structure, can be critical for speed and/or management of the container.
To answer your question, it comes down to the implementation of the STL container. If the container not only instantiates your allocator but also keeps a reference to it as a member, then you are good to go to create a fixed heap; I've found this is not always the case, as it is not part of the spec. Otherwise it becomes problematic. One solution is to wrap the container (vector, list, etc.) with another class that contains the storage, and then use an allocator that draws from that, as sketched below. This could require a lot of template magickery (tm).
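As a rough sketch of my own of the wrapping idea (reusing the stack_allocator from the earlier answer; the fixed_vector name is hypothetical):

#include <cstddef>
#include <type_traits>
#include <vector>

// Hypothetical wrapper: owns the storage and a vector that allocates from it.
template <class T, std::size_t N>
class fixed_vector
{
public:
    fixed_vector()
        : m_vec(stack_allocator<T, N>(reinterpret_cast<T*>(m_buffer)))
    {
        m_vec.reserve(N);   // keep all N elements inside m_buffer
    }

    // Copying would need per-object buffer fixups; deliberately omitted in this sketch.
    fixed_vector(const fixed_vector&) = delete;
    fixed_vector& operator=(const fixed_vector&) = delete;

    std::vector<T, stack_allocator<T, N>>&       get()       { return m_vec; }
    const std::vector<T, stack_allocator<T, N>>& get() const { return m_vec; }

private:
    // Raw, aligned storage declared before the vector so it exists when the vector is built.
    typename std::aligned_storage<sizeof(T), alignof(T)>::type m_buffer[N];
    std::vector<T, stack_allocator<T, N>> m_vec;
};

A fixed_vector<int, 16> declared as a local or as a class member then keeps its elements inside its own storage, as long as it never outgrows N.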
Apparently, there is a conforming Stack Allocator from one Howard Hinnant.
It works by using a fixed-size buffer (via a referenced arena object) and falling back to the heap if too much space is requested.
This allocator doesn't have a default ctor, and since Howard says:
I've updated this article with a new allocator that is fully C++11 conforming.
I'd say that it is not a requirement for an allocator to have a default ctor.
It really depends on your requirements. Sure, if you like, you can create an allocator that operates only on the stack, but it would be very limited, since the same stack object is not accessible from everywhere in the program the way a heap object would be.
I think this article explains allocators very well:
http://www.codeguru.com/cpp/cpp/cpp_mfc/stl/article.php/c4079
A stack-based STL allocator is of such limited utility that I doubt you will find much prior art. Even the simple example you cite quickly blows up if you later decide you want to copy or lengthen the initial lstring.
For other STL containers, such as the associative ones (tree-based internally), or even vector and deque, which use either a single or multiple contiguous blocks of RAM, the memory usage semantics quickly become unmanageable on the stack in almost any real-world usage.