I'm a little confused. In our OS course we were told that every OS handles memory fragmentation through paging or segmentation, so there is no contiguous physical memory allocation at all: the OS uses different levels of addressing (logical/physical) to avoid contiguous allocation. Yet here there are so many discussions about fragmentation. My question is: is this problem real in C++ programming on OSes that support logical addressing (does any process crash just because of memory fragmentation)? And if yes, why does each OS try to avoid contiguous addressing in the first place?
There are two layers: fragmentation in the virtual address space of the process and fragmentation in physical memory.
If you look at any modern application, you can see how its memory usage grows over time because memory is not released back to the OS. You can attribute this to other causes, but memory fragmentation (i.e. the non-contiguous placement of allocated chunks) is the core reason: in short, the memory allocator is often unable to return memory to the OS.
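To see this concretely, here is a minimal sketch for Linux/glibc (malloc_stats() is a glibc extension, and the exact behaviour depends on your allocator): after freeing every other block, half of the memory is logically free, yet the process footprint barely shrinks, because almost every heap page still holds at least one live allocation.

```cpp
// Heap fragmentation sketch (Linux/glibc; malloc_stats() is a glibc
// extension declared in <malloc.h>).
#include <cstdlib>
#include <cstddef>
#include <vector>
#include <malloc.h>

int main() {
    constexpr int kBlocks = 100000;
    constexpr std::size_t kSize = 4096;

    std::vector<void*> blocks;
    blocks.reserve(kBlocks);
    for (int i = 0; i < kBlocks; ++i)
        blocks.push_back(std::malloc(kSize));

    // Free every other block: half the memory is logically free, but
    // nearly every page still contains a live allocation, so the
    // allocator cannot hand those pages back to the kernel.
    for (int i = 0; i < kBlocks; i += 2) {
        std::free(blocks[i]);
        blocks[i] = nullptr;
    }

    malloc_stats();  // "in use" drops; the system footprint largely does not

    for (void* p : blocks) std::free(p);  // free(nullptr) is a no-op
}
```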
If you are interested in fragmentation of physical memory: even with memory organized in pages, there is still a need to allocate physically contiguous chunks. For example, to reduce the overhead of virtual memory (page-table walks and TLB misses), you might want to use large pages ("huge pages" in Linux terms). x86_64 supports 4 KiB, 2 MiB and 1 GiB pages. If there is no contiguous physical memory of the required size, you won't be able to use them.
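As an illustration, here is a minimal Linux sketch that requests one 2 MiB huge page with mmap(MAP_HUGETLB). It assumes the administrator has reserved huge pages (e.g. via /proc/sys/vm/nr_hugepages); if the pool is empty, often because physical memory was too fragmented to assemble 2 MiB of contiguous frames when the pool was filled, the mmap fails.

```cpp
// Requesting a 2 MiB huge page on Linux via mmap(MAP_HUGETLB).
// Requires huge pages reserved by the admin, e.g.:
//   echo 16 > /proc/sys/vm/nr_hugepages
#include <sys/mman.h>
#include <cstdio>

int main() {
    const std::size_t len = 2 * 1024 * 1024;  // one 2 MiB page
    void* p = mmap(nullptr, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        std::perror("mmap(MAP_HUGETLB)");  // often ENOMEM: no huge page available
        return 1;
    }
    munmap(p, len);
}
```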
If by OS you mean the kernel, then it cannot help you with fragmentation that happens inside a process's address space (heap fragmentation). The C library's allocator tries to avoid it; unfortunately, it is not always able to. See the linked question.
A memory allocator is usually unable to release a large chunk of memory back to the OS if even a small part of it is still allocated. There is a partial solution that takes advantage of the page-based organization of virtual memory: the so-called "lazy free" mechanism, represented by MADV_FREE on Linux and the BSDs and DiscardVirtualMemory on Windows. When you have a huge chunk of memory that is only partially used, you can notify the kernel that part of it is no longer needed and that it may take it back under memory pressure. The reclamation is done lazily, and only under memory pressure, because eager deallocation is expensive. Still, many memory allocators do not use it, for performance reasons.
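Here is a minimal sketch of the Linux side of this (MADV_FREE needs kernel 4.5 or newer; DiscardVirtualMemory plays the analogous role on Windows):

```cpp
// "Lazy free" on Linux: the pages stay mapped and usable, but the
// kernel may reclaim them under memory pressure instead of swapping.
#include <sys/mman.h>
#include <cstring>
#include <cstdio>

int main() {
    const std::size_t len = 64 * 1024 * 1024;
    char* buf = static_cast<char*>(mmap(nullptr, len, PROT_READ | PROT_WRITE,
                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (buf == MAP_FAILED) { std::perror("mmap"); return 1; }

    std::memset(buf, 1, len);  // touch the pages so they are backed by RAM

    // Suppose only the first quarter is still needed; tell the kernel the
    // rest can be taken back lazily if memory gets tight.
    if (madvise(buf + len / 4, len - len / 4, MADV_FREE) != 0)
        std::perror("madvise(MADV_FREE)");  // EINVAL on kernels before 4.5

    munmap(buf, len);
}
```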
So the answer to your question is: it depends on how much you care about the efficiency of your program. Most programs do not care, since the standard allocator simply does the job for them. Some programs may suffer when the standard allocator cannot do its job efficiently.
The OS is not avoiding contiguous memory allocation. At the top level you have hardware and software. Hardware has limited resources, physical memory in this case. To share that resource, and to spare user programs from having to manage the sharing themselves, the virtual addressing layer was invented. It simply maps a contiguous virtual address space onto sparse physical regions. In other words, virtual address 0x10000 can point to physical address 0x80000 in one process and to 0xf0000 in another.
Paging and swapping mean writing some pages, or the whole application's memory, to disk and bringing it back at some point. The pages will most likely map to different physical frames afterwards.
So your program always sees a contiguous virtual address space, even though it is actually fragmented across physical memory. And since the mapping is done in constant-size blocks (pages), there are no wasted, unusable holes between them.
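You can actually observe this mapping on Linux through /proc/self/pagemap. A hedged sketch follows: the pagemap format (8 bytes per page, physical frame number in bits 0-54) is documented kernel ABI, but reading the frame number requires CAP_SYS_ADMIN on kernels since 4.0; unprivileged reads return 0.

```cpp
// Translate a virtual address to its physical frame via /proc/self/pagemap.
#include <cstdint>
#include <cstdio>
#include <unistd.h>
#include <fcntl.h>

int main() {
    const long page = sysconf(_SC_PAGESIZE);
    int x = 42;  // some object whose page is certainly mapped
    const uintptr_t vaddr = reinterpret_cast<uintptr_t>(&x);

    int fd = open("/proc/self/pagemap", O_RDONLY);
    if (fd < 0) { std::perror("open"); return 1; }

    uint64_t entry = 0;
    const off_t offset = static_cast<off_t>(vaddr / page) * 8;  // 8 bytes per page
    if (pread(fd, &entry, sizeof entry, offset) != sizeof entry) {
        std::perror("pread");
        return 1;
    }
    close(fd);

    const uint64_t pfn = entry & ((1ULL << 55) - 1);  // bits 0-54: frame number
    std::printf("virtual 0x%lx -> physical frame 0x%llx\n",
                static_cast<unsigned long>(vaddr),
                static_cast<unsigned long long>(pfn));
}
```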
Now, the second level of fragmentation is the one caused by new/malloc, and it stems from the fact that you allocate and free blocks of different sizes. This fragments your heap in virtual space. The allocator tries to keep the resulting waste as small as possible.
So in your everyday C++ (or any other language) programming you normally do not need to care about either kind of memory fragmentation. Every chunk you allocate is guaranteed to be contiguous in virtual space (though not necessarily in physical memory).
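A trivial sketch of that guarantee: pointer arithmetic walks a single allocation end to end precisely because its virtual range is contiguous, whatever the kernel did with the backing physical frames.

```cpp
// One allocation = one contiguous virtual range, regardless of how
// the kernel scatters the backing physical frames.
#include <cstdio>

int main() {
    const int n = 1 << 20;
    int* a = new int[n];  // a single contiguous 4 MiB virtual range
    for (int i = 0; i < n; ++i)
        *(a + i) = i;     // adjacent elements, adjacent virtual addresses
    std::printf("span: %td bytes\n",
                reinterpret_cast<char*>(a + n) - reinterpret_cast<char*>(a));
    delete[] a;
}
```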
Source: https://stackoverflow.com/questions/52018123/memory-fragmentation-is-it-still-an-issue