Java: fastest way to do random reads on huge disk file(s)

走了就别回头了 2021-02-04 19:12

I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need to speed up some computation by several orders of magnitude (cr

4 Answers
  • 2021-02-04 19:37

    800MB is not that much to load up and store in memory. If you can afford to have multicore machines ripping away at a data set for days on end, you can afford an extra GB or two of RAM, no?

    That said, read up on Java's java.nio.MappedByteBuffer. Your comment "I think I don't want to map the 800 MB in memory" suggests the concept isn't clear yet.

    In a nutshell, a mapped byte buffer lets you access the data programmatically as if it were in memory, even though at any given moment it may live on disk or in memory; that decision is left to the OS, since Java's MBB is built on the OS's virtual memory subsystem. It is also nice and fast. You will also be able to access a single MBB from multiple threads safely.

    Here are the steps I recommend you take:

    1. Instantiate a MappedByteBuffer that maps your data file to the MBB. The creation is kinda expensive, so keep it around.
    2. In your look up method...
      1. instantiate a byte[4] array
      2. call .get(byte[] dst, int offset, int length)
      3. the byte array will now have your data, which you can turn into a value

    And presto! You have your data!
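
    Putting those steps together, here's a minimal sketch. The class and method names are mine, and it assumes the table is packed as fixed-size 4-byte entries; instead of copying into a byte[4] and decoding by hand, it uses the absolute getInt(index) shortcut. Adjust the record size and decoding to your format.

    ```java
    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    public class LookupTable {
        private final MappedByteBuffer buffer;

        public LookupTable(Path file) throws IOException {
            try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
                // Map the whole file once and keep the mapping around; a single mapping
                // is limited to 2 GB, which is plenty for 800 MB.
                buffer = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            }
        }

        // Reads the 4-byte entry stored at the given index.
        public int lookup(long index) {
            // Absolute read: no shared buffer position, so concurrent lookups are fine.
            return buffer.getInt((int) (index * 4));
        }
    }
    ```

    Because the read is absolute rather than relative, multiple threads can call lookup() on the same buffer without stepping on a shared position.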

    I'm a big fan of MBBs and have used them successfully for such tasks in the past.

  • 2021-02-04 19:40

    RandomAccessFile (blocking) may help: http://java.sun.com/javase/6/docs/api/java/io/RandomAccessFile.html
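
    A minimal sketch of the blocking approach, assuming fixed-size 4-byte entries (the class and method names are mine, and the file name in the usage note below is a placeholder):

    ```java
    import java.io.IOException;
    import java.io.RandomAccessFile;

    class RafLookup {
        // Reads the 4-byte entry at the given index. RandomAccessFile has a single
        // file pointer, so sharing one instance across threads needs synchronization;
        // give each thread its own instance, or use FileChannel instead.
        static int readEntry(RandomAccessFile raf, long index) throws IOException {
            raf.seek(index * 4L);  // position the file pointer (a blocking call)
            return raf.readInt();  // read 4 bytes as a big-endian int
        }
    }
    ```

    Open the file once with new RandomAccessFile("table.bin", "r") and reuse it for all lookups.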

    You can also use FileChannel.map() to map a region of file to memory, then read the MappedByteBuffer.

    See also: http://java.sun.com/docs/books/tutorial/essential/io/rafs.html

  • 2021-02-04 19:56

    Actually, 800 MB isn't very big. If you have 2 GB of memory or more, it can sit in the OS disk cache, if not in your application's heap itself.

  • 2021-02-04 20:01

    For the write case, on Java 7, take a look at AsynchronousFileChannel.

    When performing random, record-oriented writes across large files on NTFS (files larger than physical memory, so caching can't absorb everything), I find that AsynchronousFileChannel performs over twice as many operations as a normal FileChannel, even in single-threaded mode (10 GB file, 160-byte records, completely random write positions, some random content, several hundred iterations of the benchmarking loop to reach steady state, roughly 5,300 writes per second).
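
    For reference, a minimal single-threaded sketch of issuing one such write; the file name, record index, and 160-byte record size are placeholders mirroring the numbers above.

    ```java
    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;
    import java.util.concurrent.ExecutionException;

    public class AsyncWriteSketch {
        public static void main(String[] args)
                throws IOException, InterruptedException, ExecutionException {
            try (AsynchronousFileChannel channel = AsynchronousFileChannel.open(
                    Paths.get("data.bin"),
                    StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
                byte[] record = new byte[160];   // one 160-byte record
                long recordIndex = 12345L;       // some random record position
                ByteBuffer buf = ByteBuffer.wrap(record);
                // The write is issued asynchronously; get() blocks until it completes,
                // which keeps this single-threaded like the benchmark described above.
                channel.write(buf, recordIndex * record.length).get();
            }
        }
    }
    ```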

    My best guess is that because the asynchronous I/O boils down to overlapped I/O on Windows 7, the NTFS file system driver can update its own internal structures faster when it doesn't have to create a sync point after every call.

    I also micro-benchmarked RandomAccessFile to see how it would perform (results are very close to FileChannel's, and still about half the performance of AsynchronousFileChannel).

    Not sure what happens with multi-threaded writes. This is on Java 7, on an SSD (the SSD is an order of magnitude faster than a magnetic disk, and another order of magnitude faster still on smaller files that fit in memory).

    Will be interesting to see if the same ratios hold on Linux.
