Memory-Mapped MappedByteBuffer or Direct ByteBuffer for DB Implementation?

Backend · open · 2 answers · 1796 views
抹茶落季 asked 2020-12-23 17:40

This looks like a long question because of all the context. There are 2 questions inside the novel below. Thank you for taking the time to read this and provide assistance.

2 answers
  • 2020-12-23 18:09

    I think you shouldn't worry about mmap'ping files up to 2GB in size.

    Looking at the sources of MongoDB, as an example of a DB that makes use of memory-mapped files, you'll find it always maps the full data file in MemoryMappedFile::mapWithOptions() (which calls MemoryMappedFile::map()). The DB's data spans multiple files, each up to 2GB in size. It also preallocates the data files, so there's no need to remap as the data grows, and this prevents file fragmentation. Generally, you can draw inspiration from the source code of this DB.
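    The preallocate-then-map-whole approach described above can be sketched in Java like this. This is a minimal illustration, not MongoDB's actual code; the class name and the 16 MB size (standing in for a 2 GB extent) are made up for the demo:

    ```java
    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class PreallocatedMap {
        public static void main(String[] args) throws Exception {
            File f = File.createTempFile("data", ".bin");
            f.deleteOnExit();
            long size = 16L << 20; // preallocate 16 MB up front (stand-in for a 2 GB extent)
            try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                raf.setLength(size);                // preallocate: no remapping as data grows
                FileChannel ch = raf.getChannel();
                MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
                map.putLong(0, 0xCAFEBABEL);        // write through the mapping
                map.force();                        // flush dirty pages to disk
                System.out.println(Long.toHexString(map.getLong(0)));
            }
        }
    }
    ```

    Because the whole file is mapped once at its final size, writes never need a remap; growing the database means adding another preallocated file.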

  • 2020-12-23 18:25

    You might be interested in https://github.com/peter-lawrey/Java-Chronicle

    In this I create multiple memory mappings to the same file (each mapping's size is a power of 2, up to 1 GB). The file can be any size (up to the size of your hard drive).

    It also creates an index so you can find any record at random and each record can be any size.

    It can be shared between processes and used for low latency events between processes.

    I make the assumption you are using a 64-bit OS if you want to use large amounts of data. In this case a List of MappedByteBuffer will be all you ever need. It makes sense to use the right tools for the job. ;)

    I have found it performs well even with data sizes around 10x your main memory size (I was using a fast SSD drive, so YMMV).
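    The "list of MappedByteBuffer" idea above can be sketched as follows. This is not Java-Chronicle's actual API, just a minimal illustration of addressing a file larger than one mapping through power-of-two chunks; the ChunkedMappings class and the 1 MB demo chunk size are invented for the example:

    ```java
    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.util.ArrayList;
    import java.util.List;

    public class ChunkedMappings {
        static final int CHUNK_BITS = 20;            // 1 MB chunks for the demo; up to 1 GB in practice
        static final long CHUNK_SIZE = 1L << CHUNK_BITS;

        final List<MappedByteBuffer> chunks = new ArrayList<>();

        ChunkedMappings(FileChannel ch, long fileSize) throws Exception {
            for (long off = 0; off < fileSize; off += CHUNK_SIZE) {
                long len = Math.min(CHUNK_SIZE, fileSize - off);
                chunks.add(ch.map(FileChannel.MapMode.READ_WRITE, off, len));
            }
        }

        // Translate a global file offset into (chunk index, offset within chunk).
        byte get(long pos) {
            return chunks.get((int) (pos >>> CHUNK_BITS)).get((int) (pos & (CHUNK_SIZE - 1)));
        }

        void put(long pos, byte b) {
            chunks.get((int) (pos >>> CHUNK_BITS)).put((int) (pos & (CHUNK_SIZE - 1)), b);
        }

        public static void main(String[] args) throws Exception {
            File f = File.createTempFile("chunked", ".bin");
            f.deleteOnExit();
            long size = 3 * CHUNK_SIZE;              // a "large" file spanning three mappings
            try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
                raf.setLength(size);
                ChunkedMappings m = new ChunkedMappings(raf.getChannel(), size);
                m.put(2 * CHUNK_SIZE + 123, (byte) 42);  // lands in the third mapping
                System.out.println(m.get(2 * CHUNK_SIZE + 123));
            }
        }
    }
    ```

    With power-of-two chunks, the chunk index and intra-chunk offset are a shift and a mask, so random access stays cheap even though no single mapping covers the whole file.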
