While processing multi-gigabyte files I noticed something odd: it seems that reading from a file using a FileChannel into a re-used ByteBuffer object allocated with allocateDirect() …
When you have a loop which iterates more than about 10,000 times, it can trigger the whole method to be compiled to native code. However, the loops that come later in the method have not been run by that point and cannot be optimised to the same degree. To avoid this issue, place each loop in a different method and run again.
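A minimal sketch of the loop-splitting idea (the class, method names, and workload here are my own, purely illustrative): each loop lives in its own method, so the JIT can compile and profile each one independently once it becomes hot.

```java
public class LoopSplit {
    // Each hot loop gets its own method. Once a method passes the JIT's
    // threshold (~10,000 invocations/back-edges by default in HotSpot),
    // it can be compiled to native code on its own.
    static long sumFirst(long[] data) {
        long sum = 0;
        for (long d : data) sum += d;
        return sum;
    }

    static long sumSecond(long[] data) {
        long sum = 0;
        for (long d : data) sum += d * 2;
        return sum;
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        // Run each method several times so both get compiled,
        // instead of one long method where only the first loop wins.
        for (int run = 0; run < 5; run++) {
            long t0 = System.nanoTime();
            long s1 = sumFirst(data);
            long t1 = System.nanoTime();
            long s2 = sumSecond(data);
            long t2 = System.nanoTime();
            System.out.printf("run %d: first=%d ns, second=%d ns (s1=%d, s2=%d)%n",
                    run, t1 - t0, t2 - t1, s1, s2);
        }
    }
}
```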
Additionally, you may want to set the byte order of the ByteBuffer with order(ByteOrder.nativeOrder()) to avoid all the byte swapping when you do a getLong(), and read more than 24 bytes at a time, as reading very small portions generates many more system calls. Try reading 32*1024 bytes at a time.
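A sketch of what that looks like together (class and method names are mine): a re-used 32 KB direct buffer in native byte order, refilled in large reads so the syscall count stays low.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedReader {
    static final int CHUNK = 32 * 1024; // 32 KB per read() keeps syscalls rare

    // Sums every long in the file, reading 32 KB at a time into a re-used
    // direct buffer set to native byte order, so getLong() does no swapping.
    static long sumLongs(Path file) throws IOException {
        long sum = 0;
        ByteBuffer buf = ByteBuffer.allocateDirect(CHUNK)
                                   .order(ByteOrder.nativeOrder());
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            while (ch.read(buf) > 0) {
                buf.flip();
                while (buf.remaining() >= Long.BYTES) sum += buf.getLong();
                buf.compact(); // carry any partial long over to the next read
            }
        }
        return sum; // any trailing partial long at EOF is ignored
    }
}
```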
I would also try getLong() on a MappedByteBuffer with native byte order. This is likely to be the fastest.