How to avoid heap fragmentation?

Asked 2020-12-05 08:14 by 一个人的身影

I'm currently working on a project for medical image processing that needs a huge amount of memory. Is there anything I can do to avoid heap fragmentation and to speed up

9 Answers
  • 2020-12-05 08:59

    Without much more information about the problem (for example, which language you're using), one thing you can do is avoid allocation churn: reuse existing allocations instead of repeatedly allocating, operating, and freeing. Allocators such as dlmalloc also handle fragmentation better than the Win32 heaps.
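    A minimal sketch of the reuse idea in C++ (the function name and the inversion step are illustrative only, not from the original answer): the caller owns one scratch buffer whose capacity survives across calls, so after the first iteration no new heap block is requested.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Reuse one scratch buffer across calls instead of allocating,
    // operating, and freeing inside every call (allocation churn).
    void process_slice(const std::vector<unsigned char>& slice,
                       std::vector<unsigned char>& scratch) {
        // resize() reuses existing capacity after the first call,
        // so no new heap allocation happens on later iterations.
        scratch.resize(slice.size());
        for (std::size_t i = 0; i < slice.size(); ++i)
            scratch[i] = static_cast<unsigned char>(255 - slice[i]); // e.g. invert
    }

    int main() {
        std::vector<unsigned char> scratch;              // allocated once, reused
        for (int frame = 0; frame < 1000; ++frame) {
            std::vector<unsigned char> slice(4096, 100);
            process_slice(slice, scratch);               // no per-frame allocation
        }                                                // after the first frame
        assert(scratch.size() == 4096);
        assert(scratch[0] == 155);
        return 0;
    }
    ```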

  • 2020-12-05 09:01

    You might need to implement manual memory management. Is the image data long-lived? If not, you can use the pattern used by the Apache web server: allocate large amounts of memory and wrap them into memory pools. Pass those pools as the last argument to functions, so they can use the pool to satisfy their need for temporary memory. Once the call chain is finished, none of the memory in the pool should still be in use, so you can scrub the memory area and use it again. Allocations are fast, since they only mean adding a value to a pointer. Deallocation is really fast, since you free very large blocks of memory at once.

    If your application is multithreaded, you might need to store the pool in thread local storage, to avoid cross-thread communication overhead.
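    The pool pattern described above can be sketched as a bump allocator (class and function names here are made up for illustration; Apache's actual `apr_pool_t` API differs): one large block is allocated up front, `allocate()` just advances an offset, and `reset()` reclaims everything at once when the call chain finishes.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One big block up front; allocation is pointer arithmetic,
    // deallocation frees the whole pool at once.
    class Pool {
    public:
        explicit Pool(std::size_t capacity) : buf_(capacity), used_(0) {}

        void* allocate(std::size_t n,
                       std::size_t align = alignof(std::max_align_t)) {
            std::size_t p = (used_ + align - 1) & ~(align - 1); // align offset
            if (p + n > buf_.size()) return nullptr;            // pool exhausted
            used_ = p + n;
            return buf_.data() + p;
        }

        void reset() { used_ = 0; }       // "free" every allocation in O(1)
        std::size_t used() const { return used_; }

    private:
        std::vector<std::uint8_t> buf_;
        std::size_t used_;
    };

    // Functions that need temporary memory take the pool as the last argument.
    int sum_squares(const int* data, std::size_t n, Pool& pool) {
        int* tmp = static_cast<int*>(pool.allocate(n * sizeof(int)));
        int total = 0;
        for (std::size_t i = 0; i < n; ++i) {
            tmp[i] = data[i] * data[i];   // temporary scratch lives in the pool
            total += tmp[i];
        }
        return total;
    }

    int main() {
        Pool pool(1 << 20);                      // 1 MiB allocated up front
        const int data[] = {1, 2, 3};
        int s = sum_squares(data, 3, pool);
        assert(s == 14);
        assert(pool.used() >= 3 * sizeof(int));
        pool.reset();                            // whole call chain's memory freed
        assert(pool.used() == 0);
        return 0;
    }
    ```

    For the multithreaded case mentioned above, each thread would own its own `Pool` instance (e.g. via `thread_local`) so no locking is needed on the allocation path.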

  • 2020-12-05 09:02

    I guess you're using something unmanaged, because on managed platforms the system (garbage collector) takes care of fragmentation.

    For C/C++ you can use an allocator other than the default one (there have already been some threads about allocators on Stack Overflow).

    Also, you can create your own data storage. For example, in the project I'm currently working on, we have a custom storage pool for bitmaps (we store them in a large contiguous hunk of memory), because we have a lot of them, and we keep track of the heap fragmentation and defragment it when the fragmentation gets too big.
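    A minimal sketch of such a custom bitmap store, assuming for simplicity that all bitmaps share one fixed size (the class name and sizes are hypothetical, not from the answer): freed slots go on a free list inside one contiguous hunk, so churn never fragments the general heap.

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <vector>

    // Fixed-size block pool: one contiguous hunk carved into equal slots,
    // with a free list of slot indices for O(1) acquire/release.
    class BitmapPool {
    public:
        BitmapPool(std::size_t block_size, std::size_t count)
            : storage_(block_size * count), block_size_(block_size) {
            for (std::size_t i = count; i-- > 0; )
                free_list_.push_back(i);      // every slot starts free
        }

        unsigned char* acquire() {
            if (free_list_.empty()) return nullptr;   // pool exhausted
            std::size_t slot = free_list_.back();
            free_list_.pop_back();
            return storage_.data() + slot * block_size_;
        }

        void release(unsigned char* p) {
            free_list_.push_back((p - storage_.data()) / block_size_);
        }

        std::size_t free_count() const { return free_list_.size(); }

    private:
        std::vector<unsigned char> storage_;  // one large contiguous hunk
        std::size_t block_size_;
        std::vector<std::size_t> free_list_;
    };

    int main() {
        BitmapPool pool(64 * 64, 8);          // eight 64x64 grayscale bitmaps
        unsigned char* a = pool.acquire();
        unsigned char* b = pool.acquire();
        assert(a != nullptr && b != nullptr && a != b);
        assert(pool.free_count() == 6);
        pool.release(a);                      // slot returns to the free list
        assert(pool.free_count() == 7);
        return 0;
    }
    ```

    The defragmentation step the answer mentions (compacting live bitmaps within the hunk) is only practical because the pool, not the general-purpose heap, knows where every bitmap lives.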
