Why is deleted memory unable to be reused

Submitted by 霸气de小男生 on 2019-12-06 01:37:11

Question


I am using C++ on Windows 7 with MSVC 9.0, and have also been able to test and reproduce on Windows XP SP3 with MSVC 9.0.

If I allocate 1 GB of 0.5 MB sized objects, everything behaves as expected when I delete them. However, if I allocate 1 GB of 0.25 MB sized objects, when I delete them the memory remains reserved (yellow in Address Space Monitor) and from then on can only be used for allocations smaller than 0.25 MB.

This simple code lets you test both scenarios by changing which struct is typedef'd. After it has allocated and deleted the structs, it allocates 1 GB of 1 MB char buffers to see whether they will use the memory that the structs once occupied.

struct HalfMegStruct
{
    HalfMegStruct():m_Next(0){}

    /* return the number of objects needed to allocate one gig */
    static int getIterations(){ return 2048; }

    int m_Data[131071];
    HalfMegStruct* m_Next;
};

struct QuarterMegStruct
{
    QuarterMegStruct():m_Next(0){}

    /* return the number of objects needed to allocate one gig */
    static int getIterations(){ return 4096; }

    int m_Data[65535];
    QuarterMegStruct* m_Next;
};

// which struct to use
typedef QuarterMegStruct UseType;

int main()
{
    UseType* first = new UseType;
    UseType* current = first;

    for ( int i = 0; i < UseType::getIterations(); ++i )
        current = current->m_Next = new UseType;

    while ( first->m_Next )
    {
        UseType* temp = first->m_Next;
        delete first;
        first = temp;
    }

    delete first;

    for ( unsigned int i = 0; i < 1024; ++i )
        // one meg buffer; I'm aware this is a leak, but it's for illustrative purposes
        new char[ 1048576 ];

    return 0;
}

Below you can see my results from within Address Space Monitor. Let me stress that the only difference between these two end results is the size of the structs being allocated up to the 1 GB mark.

This seems like quite a serious problem to me, and one that many people could be suffering from and not even know it.

  • So is this by design or should this be considered a bug?
  • Can I make small deleted objects actually be free for use by larger allocations?
  • And more out of curiosity, does a Mac or a Linux machine suffer from the same problem?

Answer 1:


I cannot positively state this is the case, but it does look like memory fragmentation (in one of its many forms). The allocator (malloc) might keep buckets of different sizes to enable fast allocation; after you release memory, instead of giving it back directly to the OS, it keeps the buckets so that later allocations of the same size can be served from the same memory. If this is the case, the memory would be available for further allocations of the same size.

This type of optimization is usually disabled for big objects, as it requires reserving memory even when it is not in use. If the threshold is somewhere between your two sizes, that would explain the behavior.

Note that while you might see this as weird, in most programs (not tests, but real-life applications) memory usage patterns repeat: if you asked for 100k blocks once, more often than not you will do so again. Keeping the memory reserved can improve performance, and can actually reduce the fragmentation that would arise if all request sizes were served from the same bucket.

You can, if you want to invest some time, learn how your allocator works by analyzing its behavior. Write some tests that acquire size X, release it, then acquire size Y, and then show the memory usage. Fix the value of X and play with Y: if requests for both sizes are granted from the same buckets, you will not see reserved-but-unused memory, while if they are granted from different buckets you will see a large region left reserved after the release.
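For example, a minimal probe along those lines might look like this (the sizes and counts are arbitrary choices for illustration, not values from the answer; pause at each prompt and check the footprint in a tool such as Address Space Monitor):

#include <cstddef>
#include <cstdio>

int main()
{
    const size_t X = 262144;    // 0.25 MB blocks
    const size_t Y = 1048576;   // 1 MB blocks
    const int    N = 1024;

    char** blocks = new char*[N];

    // Pass 1: acquire N blocks of size X, then release them all.
    for ( int i = 0; i < N; ++i )
        blocks[i] = new char[X];
    for ( int i = 0; i < N; ++i )
        delete [] blocks[i];

    std::puts( "Released all X-sized blocks; check memory usage, then press Enter." );
    std::getchar();

    // Pass 2: acquire blocks of size Y. If X and Y are served from the same
    // buckets the freed memory is reused; otherwise the footprint grows again.
    for ( int i = 0; i < N / 4; ++i )
        blocks[i] = new char[Y];

    std::puts( "Acquired Y-sized blocks; check memory usage again." );
    std::getchar();

    delete [] blocks;   // the Y-sized blocks deliberately leak, as in the question
    return 0;
}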

I don't usually code for Windows, and I don't even have Windows 7, so I cannot positively state that this is the case, but it does look like it.




Answer 2:


I can confirm the same behaviour with g++ 4.4.0 under Windows 7, so the problem is not in the compiler. In fact, the program fails when getIterations() returns 3590 or more. Do you get the same cutoff? This looks like a bug in Windows system memory allocation. It's all very well for knowledgeable souls to talk about memory fragmentation, but everything got deleted here, so the observed behaviour definitely shouldn't happen.




Answer 3:


Using your code, I performed your test and got the same result: it seems this "bucket" behaviour is going on, and I suspect David Rodríguez is right in this case.

I also tried two variations. In the first, instead of allocating 1 GB of data using 1 MB buffers after deleting, I re-allocated using the same sizes as the original allocation. In the second, I allocated the half-meg structs, cleaned up, then allocated the quarter-meg structs, 512 MB of each. Both tests ended with the same memory state: only 512 MB allocated, and no large chunk of reserved memory.

As David mentions, most applications tend to make allocations of the same size. One can see quite clearly why this could be a problem, though.

Perhaps the solution, if you are allocating many smaller objects in this way, is to allocate one large block of memory and manage it yourself. Then, when you're done, free the large block in one go.
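A minimal sketch of that idea, assuming fixed-size slots threaded onto an intrusive free list (the class and member names here are illustrative, not from any library):

#include <cstddef>
#include <new>

class FixedPool
{
public:
    FixedPool( size_t slotSize, size_t slotCount )
        : m_SlotSize( slotSize < sizeof(void*) ? sizeof(void*) : slotSize )
        , m_Block( new char[ m_SlotSize * slotCount ] )
        , m_Free( 0 )
    {
        // Thread every slot onto the free list.
        for ( size_t i = 0; i < slotCount; ++i )
        {
            void* slot = m_Block + i * m_SlotSize;
            *static_cast<void**>( slot ) = m_Free;
            m_Free = slot;
        }
    }

    ~FixedPool() { delete [] m_Block; }   // one big free at the end

    void* allocate()
    {
        if ( !m_Free ) return 0;          // pool exhausted
        void* slot = m_Free;
        m_Free = *static_cast<void**>( slot );
        return slot;
    }

    void release( void* slot )
    {
        *static_cast<void**>( slot ) = m_Free;
        m_Free = slot;
    }

private:
    size_t m_SlotSize;
    char*  m_Block;
    void*  m_Free;   // head of the intrusive free list
};

Objects are then constructed into the slots with placement new, for example new (pool.allocate()) QuarterMegStruct, and destroyed manually before pool.release(). The whole gigabyte goes back to the system in one delete [] when the pool is destroyed, so nothing is left reserved.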




Answer 4:


I spoke with some authorities on the subject (Greg, if you're out there, say hi ;D) and can confirm that what David is saying is basically right.

As the heap grows in the first pass of allocating ~0.25 MB objects, it is reserving and committing memory. As the heap shrinks in the delete pass, it decommits at some pace but does not necessarily release the virtual address ranges it reserved in the allocation pass. In the last allocation pass, the 1 MB allocations bypass the heap because of their size, and thus begin to compete with the heap for VA.

Note that the heap is reserving the VA, not keeping it committed. VirtualAlloc and VirtualFree can help explain the difference if you're curious. This fact doesn't solve the problem you ran into, which is that the process ran out of virtual address space.
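If you want to see that difference directly, here is a small Windows-only sketch (error handling omitted; in real code the return values should be checked):

#include <windows.h>

int main()
{
    const SIZE_T size = 1048576;   // 1 MB

    // Reserve address space only: the range is claimed in the process's VA,
    // but no storage is committed and the pages cannot be touched yet.
    void* p = VirtualAlloc( NULL, size, MEM_RESERVE, PAGE_NOACCESS );

    // Commit the reserved range: the pages become usable and count against
    // the system commit charge.
    VirtualAlloc( p, size, MEM_COMMIT, PAGE_READWRITE );
    static_cast<char*>( p )[0] = 42;

    // Decommit: the storage is given back, but the address range stays
    // reserved; this mirrors what the heap does when it shrinks.
    VirtualFree( p, size, MEM_DECOMMIT );

    // Release: the address range itself is returned to the process.
    // Note that dwSize must be 0 with MEM_RELEASE.
    VirtualFree( p, 0, MEM_RELEASE );

    return 0;
}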




Answer 5:


This is a side-effect of the Low-Fragmentation Heap.

http://msdn.microsoft.com/en-us/library/aa366750(v=vs.85).aspx

You should try disabling it to see if that helps. Run against both GetProcessHeap and the CRT heap (and any other heaps you may have created).
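As a starting point, this hedged sketch only queries whether the LFH is active on a given heap; a HeapCompatibilityInformation value of 2 means the LFH is in use (0 is the standard heap, 1 the older look-aside heap). _get_heap_handle is the MSVC CRT call that exposes the CRT heap handle:

#include <windows.h>
#include <malloc.h>   // _get_heap_handle (MSVC CRT)
#include <cstdio>

static void reportHeapType( const char* name, HANDLE heap )
{
    ULONG info = 0;
    if ( HeapQueryInformation( heap, HeapCompatibilityInformation,
                               &info, sizeof(info), NULL ) )
        std::printf( "%s: compatibility = %lu (%s)\n",
                     name, info, info == 2 ? "LFH" : "not LFH" );
    else
        std::printf( "%s: query failed, error %lu\n", name, GetLastError() );
}

int main()
{
    reportHeapType( "process heap", GetProcessHeap() );
    reportHeapType( "CRT heap", reinterpret_cast<HANDLE>( _get_heap_handle() ) );
    return 0;
}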



Source: https://stackoverflow.com/questions/5416513/why-is-deleted-memory-unable-to-be-reused
