Suppose I have a dataset that is an array of 1e12 32-bit ints (4 TB) stored in a file on a 4 TB HDD with an ext4 filesystem.
Consider that the data is most likely random (or at least looks random), and that accesses to it are purely random as well. Is it better to fetch individual values with mmap() or with explicit seek()/read() calls?
I'd say performance should be similar if access is truly random: the OS will use much the same caching strategy whether a page is mapped into your process from the file or the file's data simply sits in the page cache without being mapped.
Assuming the cache is ineffective: with explicit reads you can use fadvise (posix_fadvise() with POSIX_FADV_RANDOM) to declare your access pattern in advance and disable readahead. So I'd go with explicit reads.
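For illustration, here is a minimal sketch of that explicit-read approach (not code from the question): it assumes a hypothetical file data.bin on Linux/glibc and a hypothetical element index, advises the kernel that access is random, and pulls one 32-bit value with pread().

    #define _GNU_SOURCE            /* for posix_fadvise() and pread() */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);          /* hypothetical file name */
        if (fd < 0) { perror("open"); return 1; }

        /* Declare the access pattern up front: random, so no readahead. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_RANDOM);

        uint64_t index = 123456789ULL;                /* hypothetical element index */
        int32_t value;
        if (pread(fd, &value, sizeof value, (off_t)index * sizeof value)
                != (ssize_t)sizeof value) {
            perror("pread"); close(fd); return 1;
        }

        printf("a[%llu] = %d\n", (unsigned long long)index, value);
        close(fd);
        return 0;
    }

Each lookup is then exactly one system call and, at most, one disk read of the block containing the value.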
Seek performance depends heavily on your file system implementation. Ext4 should be a good choice, since it uses extent trees. Moreover, if your file is allocated contiguously, the extent tree consists of a single entry, which makes seeking trivially efficient.
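If you want to check the allocation, filefrag -v (from e2fsprogs) reports the extent layout; the sketch below does roughly the same thing programmatically with the FIEMAP ioctl, again assuming a hypothetical data.bin on Linux. A count of 1 means the file occupies a single contiguous extent.

    #include <fcntl.h>
    #include <linux/fiemap.h>
    #include <linux/fs.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);   /* hypothetical file name */
        if (fd < 0) { perror("open"); return 1; }

        struct fiemap fm;
        memset(&fm, 0, sizeof fm);
        fm.fm_start = 0;
        fm.fm_length = ~0ULL;        /* whole file */
        fm.fm_extent_count = 0;      /* 0 = just count extents, return none */

        if (ioctl(fd, FS_IOC_FIEMAP, &fm) < 0) { perror("FIEMAP"); return 1; }
        printf("file is mapped by %u extent(s)\n", fm.fm_mapped_extents);

        close(fd);
        return 0;
    }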
On the one hand, with a memory-mapped file you lean on the kernel's paging machinery: page faults are handled transparently for the application (and here they will mostly be major faults, since the data will rarely already be in the page cache). On the other hand, with explicit reads you make numerous system calls, with their well-known overhead. The Wikipedia page on memory-mapped files seems quite clear to me; it covers the pros and cons fairly comprehensively.
I think a 64-bit architecture plus a very large file argues for the memory-mapped approach, if only to avoid complicating the application; I have been told that complexity often leads to poor performance. However, mmap()
is usually employed for sequential access, which is not the goal here.
Because this is pure random access, there is little chance that two accesses will land in the same page already loaded in RAM. A full 4 KiB page will be pulled from the HDD into RAM just for 4 bytes of data... That is wasted bus and I/O bandwidth and will probably result in poor performance.
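To make the comparison concrete, here is a hedged sketch of what the mmap() side might look like (again with a hypothetical data.bin and element index); madvise(MADV_RANDOM) plays the role that posix_fadvise(POSIX_FADV_RANDOM) plays for explicit reads, telling the kernel not to read ahead.

    #define _DEFAULT_SOURCE        /* for madvise() */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);            /* hypothetical file name */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file; pages are only read from disk when touched. */
        int32_t *a = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (a == MAP_FAILED) { perror("mmap"); return 1; }

        /* Random access pattern: disable readahead on the mapping. */
        madvise(a, (size_t)st.st_size, MADV_RANDOM);

        uint64_t index = 123456789ULL;                  /* hypothetical element index */
        /* This single access faults in a full page just to deliver 4 bytes. */
        printf("a[%llu] = %d\n", (unsigned long long)index, a[index]);

        munmap(a, (size_t)st.st_size);
        close(fd);
        return 0;
    }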
Hope this helps.
For a 4 TB linear dataset you probably don't need a file system at all; accessing the raw device directly may bring some performance benefit.
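As a hedged illustration of that idea, the sketch below uses O_DIRECT to bypass the page cache, which gets most of the way toward raw-device behaviour; the path, the 4096-byte alignment and the element index are all assumptions, and the same code would work on a block device like /dev/sdX if the array starts at offset 0 of the device.

    #define _GNU_SOURCE            /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define BLOCK 4096             /* assumed alignment requirement for O_DIRECT */

    int main(void) {
        int fd = open("data.bin", O_RDONLY | O_DIRECT);   /* or e.g. "/dev/sdX" */
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, BLOCK, BLOCK) != 0) { close(fd); return 1; }

        uint64_t index = 123456789ULL;                    /* hypothetical element index */
        off_t byte_off = (off_t)(index * sizeof(int32_t));
        off_t blk_off  = byte_off - (byte_off % BLOCK);   /* O_DIRECT needs aligned offsets */

        if (pread(fd, buf, BLOCK, blk_off) != BLOCK) { perror("pread"); return 1; }

        int32_t value = ((int32_t *)buf)[(byte_off - blk_off) / sizeof(int32_t)];
        printf("a[%llu] = %d\n", (unsigned long long)index, value);

        free(buf);
        close(fd);
        return 0;
    }

Note that with O_DIRECT you give up the page cache entirely, so repeated reads of the same block hit the disk every time.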
Also, there may be a way to optimize the queries or the data structure so that caching can be used more effectively?