I'm currently writing something that needs to handle very large text files (a few GiB at least). What's needed here (and this is fixed) is:
CharBuffer assumes all characters are UTF-16 code units (or UCS-2; the difference is that UTF-16 can represent code points beyond U+FFFF as surrogate pairs of two code units, while UCS-2 is limited to the Basic Multilingual Plane).
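The UTF-16/UCS-2 distinction is easy to demonstrate in Java, since `char` is a UTF-16 code unit. A character outside the BMP, such as an emoji, occupies two `char`s but counts as one code point:

```java
public class Utf16Demo {
    public static void main(String[] args) {
        // U+1F600 (GRINNING FACE) lies outside the BMP, so UTF-16
        // encodes it as a surrogate pair: two char values, one code point.
        String s = "\uD83D\uDE00";
        System.out.println(s.length());                      // 2 (UTF-16 code units)
        System.out.println(s.codePointCount(0, s.length())); // 1 (code point)
        // UCS-2 has no surrogates, so it simply cannot represent U+1F600.
    }
}
```

This is why indexing a `CharBuffer` by position gives you code units, not characters, once non-BMP text is involved.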
The problem with a proper text format is that you need to read every byte to know where the n-th character or the n-th line is. I use multi-GB text files, but I assume ASCII-7 data and only read/write sequentially.
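Sequential processing of a multi-GB file is straightforward and memory-cheap, because only one buffered line is ever held at a time. A minimal sketch (the path and the line-counting task are my own illustration, not from the original post):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class SequentialScan {
    // Streams the file line by line; memory use is bounded by the
    // buffer size regardless of how large the file is.
    static long countLines(Path path) throws IOException {
        long lines = 0;
        try (BufferedReader r =
                 Files.newBufferedReader(path, StandardCharsets.US_ASCII)) {
            while (r.readLine() != null) {
                lines++;
            }
        }
        return lines;
    }
}
```

With ASCII-7 data every character is one byte, so byte offsets and character offsets coincide, which is what makes this kind of streaming so cheap.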
If you want random access on an unindexed text file, you can't expect it to be performant.
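If you do need random access, the usual workaround is to build the index yourself: one sequential pass records the byte offset of each line start, after which fetching the n-th line is a single seek plus read. A sketch, assuming single-byte ASCII data and `\n` line endings (class and method names are mine):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.ArrayList;
import java.util.List;

public class LineIndex {
    // Byte offset of the start of each line, filled by one full scan.
    private final List<Long> offsets = new ArrayList<>();
    private final RandomAccessFile file;

    public LineIndex(RandomAccessFile file) throws IOException {
        this.file = file;
        offsets.add(0L);
        file.seek(0);
        long pos = 0;
        int b;
        while ((b = file.read()) != -1) {
            pos++;
            if (b == '\n') {
                offsets.add(pos); // next line starts after the newline
            }
        }
    }

    // Returns the n-th line (0-based) with one seek; no rescan needed.
    public String line(int n) throws IOException {
        file.seek(offsets.get(n));
        return file.readLine();
    }
}
```

The index costs one full read up front plus eight bytes per line in memory; for a faster build you would wrap the scan in a buffered read rather than calling `read()` byte by byte.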
If you are willing to buy a new server, you can get one with 24 GB of RAM for around £1,800 and 64 GB for around £4,200. That would allow you to load even multi-GB files entirely into memory.