I have a program that uses way too much memory because it allocates numerous small objects on the heap, so I would like to investigate ways to optimize it. The program is compiled
If you know you're going to do lots of small allocations and you're worried about memory fragmentation, why not allocate a single large buffer and then sub-allocate out of it yourself? You'll probably see some performance improvement as well if you're doing a lot of allocating and deallocating.
Stack Overflow has some useful posts on avoiding memory fragmentation that might help; a minimal sketch of the single-buffer idea follows.
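As a rough sketch of that idea (the Arena class here is mine, not a library type, and error handling is kept minimal): grab one big block up front, hand out aligned slices of it, and release everything in one go when the arena is destroyed.

#include <cstddef>
#include <cstdlib>
#include <new>

// Minimal bump allocator: one big malloc up front, then carve aligned
// slices off it. Nothing is freed individually; the whole block is
// released when the arena is destroyed. Objects placed in it must have
// their destructors run manually (or be trivially destructible).
class Arena {
public:
    explicit Arena(std::size_t capacity)
        : base_(static_cast<char*>(std::malloc(capacity))),
          size_(capacity), used_(0) {
        if (!base_) throw std::bad_alloc();
    }
    ~Arena() { std::free(base_); }

    // align must be a power of two no larger than alignof(std::max_align_t).
    void* allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t offset = (used_ + align - 1) & ~(align - 1);
        if (offset + n > size_) return nullptr;   // arena exhausted
        used_ = offset + n;
        return base_ + offset;
    }

private:
    char* base_;
    std::size_t size_;
    std::size_t used_;
};

// Usage: Arena arena(1 << 20);
// MyType* p = new (arena.allocate(sizeof(MyType), alignof(MyType))) MyType();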
There is no exact answer, because some heap managers use different amounts of memory for successive allocations of the same size. Also, there is (usually) no direct way to measure the number of bytes a particular allocation took.
You can approximate this value by allocating a certain number of items of the same size (say, 4096) and noting the difference in memory used; dividing the latter by the former gives you the per-item cost. Please note that this value varies from OS to OS and from OS version to OS version, and that Debug builds of your application may enable extra heap tracking, which increases the overhead. On some OSes users can change heap policies (i.e. make a different heap allocator the default). Example: Windows and pageheap.exe. A sketch of the measurement follows.
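On Windows, for example, the before/after measurement could look like this (a sketch: GetProcessMemoryInfo is a real psapi call, but the surrounding harness, the 16-byte object size, and the choice of PagefileUsage as the metric are my own assumptions; link with psapi.lib):

#include <windows.h>
#include <psapi.h>   // GetProcessMemoryInfo; link with psapi.lib
#include <cstdio>

static SIZE_T committed_bytes() {
    PROCESS_MEMORY_COUNTERS pmc = {};
    pmc.cb = sizeof(pmc);
    GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
    return pmc.PagefileUsage;   // private committed bytes
}

int main() {
    const int kCount = 4096;
    char* ptrs[4096];
    SIZE_T before = committed_bytes();
    for (int i = 0; i < kCount; ++i)
        ptrs[i] = new char[16];             // the small allocation under test
    SIZE_T after = committed_bytes();
    std::printf("~%u bytes per 16-byte allocation\n",
                (unsigned)((after - before) / kCount));
    for (int i = 0; i < kCount; ++i)
        delete[] ptrs[i];
    return 0;
}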
Just FYI, the default (not LFH) heap on Windows 32-bit takes up:
Have you tried:
SomeClass* some_instance = new SomeClass;
printf("Size of SomeClass == %zu\n", sizeof(*some_instance));
I seem to recall that by passing in a dereferenced instance you get the size of the object itself; note, though, that this is the compile-time sizeof of the class, and it does not include whatever per-allocation overhead the heap manager adds.
You're probably looking to move to a memory pool model (which is what the previous answer about allocating a large buffer was describing). Because memory pools do not require per-allocation overhead, they offer space savings for large numbers of small objects. If the lifetime of a group of those small objects is short (i.e. you allocate a bunch of small objects and then need to get rid of the lot), a memory pool is also much faster, because you can free the whole pool instead of each object.
Wikipedia has some info on the technique as well as a link to a few implementations:
http://en.wikipedia.org/wiki/Memory_pool
You should be able to find other implementations with a simple web search; an illustrative sketch follows.
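As an illustration (my own FixedPool sketch, not one of the linked implementations): carve one buffer into equal-size slots and thread a free list through the unused ones, so allocate and free are each a single pointer swap, and the whole pool disappears with one free.

#include <cstddef>
#include <cstdlib>

// Fixed-size pool: one allocation carved into equal slots, with a singly
// linked free list threaded through the unused slots. slot_size must be
// at least sizeof(void*) and a multiple of the objects' alignment.
class FixedPool {
public:
    FixedPool(std::size_t slot_size, std::size_t slot_count)
        : buffer_(static_cast<char*>(std::malloc(slot_size * slot_count))),
          free_list_(nullptr) {
        if (!buffer_) return;               // error handling kept minimal
        for (std::size_t i = 0; i < slot_count; ++i) {
            void* slot = buffer_ + i * slot_size;
            *static_cast<void**>(slot) = free_list_;   // push slot
            free_list_ = slot;
        }
    }
    ~FixedPool() { std::free(buffer_); }    // frees every object at once

    void* allocate() {                      // O(1): pop the free list
        void* slot = free_list_;
        if (slot) free_list_ = *static_cast<void**>(slot);
        return slot;                        // nullptr when exhausted
    }
    void deallocate(void* slot) {           // O(1): push back on the list
        *static_cast<void**>(slot) = free_list_;
        free_list_ = slot;
    }

private:
    char* buffer_;
    void* free_list_;
};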
Under Windows you can use the Heap32First/Heap32Next functions together with the HEAPENTRY32 structure to determine the size of any given heap entry, assuming you are not using a customised heap manager. It is also worth pointing out that the size of an allocated block is liable to be larger in Debug builds than in Release builds due to guard bytes. I don't see any mention of Heap64 functions on MSDN, so I guess the Heap32 names are used on 64-bit Windows as well. For example:
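A sketch of walking the current process's heaps with the Tool Help APIs (the calls are real Win32 functions; error handling is trimmed):

#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>

int main() {
    // Tool Help snapshot of this process's heap lists.
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPHEAPLIST,
                                           GetCurrentProcessId());
    if (snap == INVALID_HANDLE_VALUE) return 1;

    HEAPLIST32 hl;
    hl.dwSize = sizeof(hl);
    if (Heap32ListFirst(snap, &hl)) {
        do {
            HEAPENTRY32 he;
            he.dwSize = sizeof(he);
            if (Heap32First(&he, GetCurrentProcessId(), hl.th32HeapID)) {
                do {
                    std::printf("block at %p, size %llu\n",
                                reinterpret_cast<void*>(he.dwAddress),
                                static_cast<unsigned long long>(he.dwBlockSize));
                    he.dwSize = sizeof(he);
                } while (Heap32Next(&he));
            }
        } while (Heap32ListNext(snap, &hl));
    }
    CloseHandle(snap);
    return 0;
}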
Not for certain in a platform-independent way. I don't remember the details off-hand (except for an OS I'm pretty sure you're not using), but your OS might offer a way of querying the "size" of a malloc'd allocation, and new might be implemented on top of malloc or an equivalent. So you might be able to get what the memory allocator thinks of as the size of the allocation. This may or may not include any header preceding the allocation (probably not, I'd guess). For example:
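glibc exposes malloc_usable_size and the Microsoft CRT exposes _msize for exactly this (both are real but non-standard calls; that your new routes through malloc is an assumption you'd have to verify for your toolchain):

#include <cstdio>
#include <cstdlib>
#ifdef _MSC_VER
#include <malloc.h>                 // _msize (Microsoft CRT)
#define USABLE_SIZE(p) _msize(p)
#else
#include <malloc.h>                 // malloc_usable_size (glibc)
#define USABLE_SIZE(p) malloc_usable_size(p)
#endif

int main() {
    void* p = std::malloc(13);
    // Reports the usable size of the block, which may be larger than the
    // 13 bytes requested; it does not count the allocator's own header.
    std::printf("requested 13, block holds %zu usable bytes\n",
                static_cast<std::size_t>(USABLE_SIZE(p)));
    std::free(p);
    return 0;
}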
One option is to allocate a few million small objects, and look at how much memory your program is using. Then allocate a few million more and look again. The total will usually go up in chunks as the process is allocated RAM (or virtual address space) by the OS, but with a large number of objects the effect of this "rounding error" will normally tend towards 0 bytes per object.
This should tell you what you probably want to know, which is the average memory overhead of numerous small heap objects. It will include any bookkeeping overhead of the memory allocator, such as headers immediately before the allocation or external structures used to track allocations and/or blocks. On Linux, for example:
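A sketch of that measurement (reading resident-set size from /proc/self/statm, which is a real Linux interface; the count and the 16-byte size are arbitrary, and the objects are deliberately leaked for the duration of the test):

#include <cstdio>
#include <unistd.h>

// Resident set size in bytes, read from /proc/self/statm (Linux-specific).
static long rss_bytes() {
    long pages = 0, resident = 0;
    if (FILE* f = std::fopen("/proc/self/statm", "r")) {
        (void)std::fscanf(f, "%ld %ld", &pages, &resident);
        std::fclose(f);
    }
    return resident * sysconf(_SC_PAGESIZE);
}

int main() {
    const long kCount = 1000000;        // "a few million" small objects
    long before = rss_bytes();
    for (long i = 0; i < kCount; ++i)
        new char[16];                   // deliberately leaked for the test
    long after = rss_bytes();
    std::printf("~%ld bytes per object (16 requested)\n",
                (after - before) / kCount);
    return 0;
}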