Java MemoryMapping big files


Question


Java's MappedByteBuffer is limited to 2 GiB, which makes it tricky to use for mapping big files. The usual recommended approach is to use an array of MappedByteBuffer and index into it:

long PAGE_SIZE = Integer.MAX_VALUE;
MappedByteBuffer[] buffers;

// which buffer in the array holds this absolute offset
private int getPage(long offset) {
    return (int) (offset / PAGE_SIZE);
}

// position of that offset within the buffer
private int getIndex(long offset) {
    return (int) (offset % PAGE_SIZE);
}

public byte get(long offset) {
    return buffers[getPage(offset)].get(getIndex(offset));
}
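For context, here is a minimal sketch of how such an array could be populated; the mapPages helper name and the read-only mode are illustrative assumptions, not part of the original post. Note that the mappings remain valid after the channel is closed.

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative helper (hypothetical name): map a file into PAGE_SIZE-sized chunks.
private MappedByteBuffer[] mapPages(Path file) throws IOException {
    try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
        long size = ch.size();
        int pages = (int) ((size + PAGE_SIZE - 1) / PAGE_SIZE); // ceiling division
        MappedByteBuffer[] result = new MappedByteBuffer[pages];
        for (int i = 0; i < pages; i++) {
            long start = i * PAGE_SIZE;
            long length = Math.min(PAGE_SIZE, size - start); // last page may be shorter
            result[i] = ch.map(FileChannel.MapMode.READ_ONLY, start, length);
        }
        return result;
    }
}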

This indexing scheme works for single-byte access, but it requires rewriting a lot of code if you want to handle reads/writes that are larger and cross page boundaries (getLong() or get(byte[])).
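To make the amount of boilerplate concrete, here is a sketch (not from the original post) of a get(byte[]) that tolerates boundary crossings by falling back to the single-byte accessor above:

import java.nio.ByteBuffer;

// Illustrative sketch: bulk read that handles page-boundary crossings.
public void get(long offset, byte[] dst) {
    int page = getPage(offset);
    int index = getIndex(offset);
    if ((long) index + dst.length <= PAGE_SIZE) {
        // Fast path: the whole range lies inside a single buffer.
        ByteBuffer b = buffers[page].duplicate(); // duplicate so the shared position is untouched
        b.position(index);
        b.get(dst);
    } else {
        // Slow path: the range straddles a boundary; copy byte by byte.
        for (int i = 0; i < dst.length; i++) {
            dst[i] = get(offset + i);
        }
    }
}

Every multi-byte accessor (getLong(), get(byte[]), and so on) needs a similar fast/slow split, which is exactly the rewriting the question is about.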

The question: what is your best practice for this kind of scenario? Do you know of any working solution/code that can be reused without reinventing the wheel?


Answer 1:


Have you checked out dsiutils' ByteBufferInputStream?

Javadoc

The main usefulness of this class is making it possible to create input streams that are really based on a MappedByteBuffer.

In particular, the factory method map(FileChannel, FileChannel.MapMode) will memory-map an entire file into an array of ByteBuffers and expose that array as a ByteBufferInputStream. This makes it possible to easily access memory-mapped files larger than 2 GiB through long-based methods:

  • long length()
  • long position()
  • void position(long newPosition)

Is that something you were thinking of? It's LGPL too.
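For illustration, a minimal usage sketch based on the factory method described above (the file name is hypothetical, and the it.unimi.dsi.io package location is assumed from dsiutils):

import it.unimi.dsi.io.ByteBufferInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class MapHugeFile {
    public static void main(String[] args) throws IOException {
        try (FileInputStream fis = new FileInputStream("huge.bin")) { // hypothetical file
            FileChannel channel = fis.getChannel();
            // Maps the whole file into an array of ByteBuffers behind the scenes.
            ByteBufferInputStream in =
                ByteBufferInputStream.map(channel, FileChannel.MapMode.READ_ONLY);
            System.out.println("Mapped length: " + in.length());
            in.position(in.length() - 1); // long-based seek, so offsets past 2 GiB work
            System.out.println("Last byte: " + in.read());
        }
    }
}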



Source: https://stackoverflow.com/questions/5675748/java-memorymapping-big-files
