mmap problem, allocates huge amounts of memory

野趣味 2020-12-23 18:17

I got some huge files I need to parse, and people have been recommending mmap because this should avoid having to allocate the entire file in-memory.

But looking at

8 Answers
  • 2020-12-23 18:28

    "Allocate the whole file in memory" conflates two issues. One is how much virtual memory you allocate; the other is which parts of the file are read from disk into memory. Here you are allocating enough address space to contain the whole file, but only the pages that you touch will actually be read in or changed on disk. And they will be changed correctly no matter what happens to the process, once you have updated the bytes in the memory that mmap allocated for you.

    You can allocate less memory by mapping only a section of the file at a time, using the "size" and "offset" parameters of mmap. Then you have to manage a window into the file yourself by mapping and unmapping, perhaps moving the window through the file.

    Allocating a big chunk of address space takes appreciable time, which can introduce an unexpected delay into the application. If your process is already memory-intensive, the virtual address space may have become fragmented, and it may be impossible to find a big enough contiguous chunk for a large file at the time you ask. It may therefore be necessary to do the mapping as early as possible, or to use some strategy to keep a large enough chunk of address space available until you need it.

    However, seeing as you specify that you need to parse the file, why not avoid this entirely by organizing your parser to operate on a stream of data? Then the most you will need is some look-ahead and some history, instead of needing to map discrete chunks of the file into memory.

  • 2020-12-23 18:29

    No, what you're doing is mapping the file into memory. This is different to actually reading the file into memory.

    Were you to read it in, you would have to transfer the entire contents into memory. By mapping it, you let the operating system handle it. If you attempt to read or write to a location in that memory area, the OS will load the relevant section for you first. It will not load the entire file unless the entire file is needed.

    That is where you get your performance gain. If you map the entire file but only change one byte then unmap it, you'll find that there's not much disk I/O at all.

    Of course, if you touch every byte in the file, then yes, it will all be loaded at some point but not necessarily in physical RAM all at once. But that's the case even if you load the entire file up front. The OS will swap out parts of your data if there's not enough physical memory to contain it all, along with that of the other processes in the system.

    The main advantages of memory mapping are:

    • You defer reading the file sections until they're needed (and, if they're never needed, they don't get loaded). So there's no big upfront cost as there is when you load the entire file; it amortises the cost of loading.
    • The writes are automated: you don't have to write out every byte. Just close the mapping and the OS will write out the changed sections. I think this also happens when the memory is swapped out (in low physical memory situations), since your buffer is simply a window onto the file.

    Keep in mind that there is most likely a disconnect between your address space usage and your physical memory usage. You can allocate an address space of 4G (ideally, though there may be OS, BIOS or hardware limitations) in a 32-bit machine with only 1G of RAM. The OS handles the paging to and from disk.

    And to answer your further request for clarification:

    Just to clarify: so if I need the entire file, mmap will actually load the entire file?

    Yes, but it may not be in physical memory all at once. The OS will swap out bits back to the filesystem in order to bring in new bits.

    But it will also do that if you've read the entire file in manually. The difference between those two situations is as follows.

    With the file read into memory manually, the OS will swap parts of your address space (which may or may not include that data) out to the swap file. And you will need to manually rewrite the file when you're finished with it.

    With memory mapping, you have effectively told it to use the original file as an extra swap area for that file/memory only. And, when data is written to that swap area, it affects the actual file immediately. So no having to manually rewrite anything when you're done and no affecting the normal swap (usually).

    It really is just a window to the file:

                        [figure: a memory-mapped file acts as a window onto the file on disk]

  • 2020-12-23 18:30

    You need to specify a size smaller than the total size of the file in the mmap call, if you don't want the entire file mapped into memory at once. Using the offset parameter, and a smaller size, you can map in "windows" of the larger file, one piece at a time.

    If your parsing is a single pass through the file, with minimal lookback or look-forward, then you won't actually gain anything by using mmap instead of standard library buffered I/O. In the example you gave of counting the newlines in the file, it'd be just as fast to do that with fread(). I assume that your actual parsing is more complex, though.

    If you need to read from more than one part of the file at a time, you'll have to manage multiple mmap regions, which can quickly get complicated.

  • 2020-12-23 18:33

    A little off topic.

    I don't quite agree with Mark's answer. Actually mmap can be faster than fread.

    Besides going through the system's disk cache, fread also maintains an internal buffer, and on top of that the data is copied into the user-supplied buffer when it is called.

    By contrast, mmap just returns a pointer into the system's buffer, so those two memory copies are saved.

    But using mmap is a little dangerous. You must make sure the pointer never goes past the end of the file, or you will get a segmentation fault, whereas fread in that case merely returns zero.

  • 2020-12-23 18:45

    The system will certainly try to put all your data in physical memory. What you will conserve is swap.

  • 2020-12-23 18:47

    You can also use fadvise(2) (and madvise(2); see also posix_fadvise & posix_madvise) to mark a mapped file (or parts of it) for sequential, read-once access.

    #include <sys/mman.h> 
    
    int madvise(void *start, size_t length, int advice);
    

    The advice is indicated in the advice parameter which can be

    MADV_SEQUENTIAL 
    

    Expect page references in sequential order. (Hence, pages in the given range can be aggressively read ahead, and may be freed soon after they are accessed.)

    Portability: posix_madvise and posix_fadvise are part of the Advanced Realtime option of IEEE Std 1003.1, 2004, and the constants are POSIX_MADV_SEQUENTIAL and POSIX_FADV_SEQUENTIAL.
