I have a file from which I read data. All the text from this file is stored in a String variable (a very big variable). Then in another part of my app I want to walk through this string.
If you can loosen your requirements a bit you could implement a java.lang.CharSequence backed by your file.
CharSequence is supported in many places in the JDK (a String is a CharSequence), so this is a good alternative to a Reader-based implementation.
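A minimal sketch of such a file-backed CharSequence, assuming single-byte-encoded content (ASCII/Latin-1) and a file under 2 GB; the class name and memory-mapping strategy are illustrative, not a fixed API:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: a CharSequence backed by a memory-mapped file.
// Assumes single-byte content (ASCII/Latin-1); a multi-byte encoding
// would need an index mapping char offsets to byte offsets.
// A single mapping is limited to files smaller than 2 GB.
final class FileCharSequence implements CharSequence {
    private final MappedByteBuffer buffer;
    private final int start, end;

    FileCharSequence(Path path) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            // The mapping remains valid after the channel is closed.
            this.buffer = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
        this.start = 0;
        this.end = buffer.limit();
    }

    private FileCharSequence(MappedByteBuffer buffer, int start, int end) {
        this.buffer = buffer;
        this.start = start;
        this.end = end;
    }

    @Override public int length() {
        return end - start;
    }

    @Override public char charAt(int index) {
        // Absolute read: nothing is copied into the heap.
        return (char) (buffer.get(start + index) & 0xFF);
    }

    @Override public CharSequence subSequence(int from, int to) {
        // Sub-sequences share the same mapping, just like old-style substrings.
        return new FileCharSequence(buffer, start + from, start + to);
    }

    @Override public String toString() {
        StringBuilder sb = new StringBuilder(length());
        for (int i = 0; i < length(); i++) sb.append(charAt(i));
        return sb.toString();
    }
}
```

Because nothing is pulled onto the heap until you ask for it, you can hand this to any API that accepts a CharSequence (regex matching, for example) without ever materializing the whole file as a String.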
Others have suggested reading and processing portions of your file one at a time. If possible, one of those approaches would be better.
However, if this is not possible and you are able to load the String into memory initially as you indicate, but it is the later parsing of this string that creates problems, you may be able to use substrings. In Java a substring maps on top of the original char array and only takes memory for the base Object plus the start and length int fields.
So, when you find a portion of the string that you want to keep separately, use something like:
String piece = largeString.substring(foundStart, foundEnd);
If you instead use this, or code that internally does this, then the memory use will increase dramatically:
new String(largeString.substring(foundStart, foundEnd));
Note that you must use String.substring() with care for this very reason. You could take a substring of a very large string and then discard your reference to the original string. The problem is that the substring still references the original large char array, and the GC will not release that array until the substring is also collected. In cases like this, it is actually useful to use new String(...) to ensure the unused large array can be discarded by the GC (this is one of the few cases where you should ever use new String(...)).
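The difference can be sketched as follows (this reflects the pre-7u6 substring semantics described above; hugeContents() is a stand-in for your file's text):

```java
import java.util.Arrays;

public class DetachSubstring {
    // Stand-in for a large file's contents (hypothetical data):
    // a small interesting prefix followed by a million filler chars.
    static String hugeContents() {
        char[] filler = new char[1_000_000];
        Arrays.fill(filler, 'x');
        return "needle" + new String(filler);
    }

    public static void main(String[] args) {
        String large = hugeContents();
        // On pre-7u6 JVMs this substring still pins the million-char array:
        String shared = large.substring(0, 6);
        // Copying the six characters detaches them, so dropping `large`
        // leaves nothing referencing the big array:
        String detached = new String(shared);
        large = null;
        System.out.println(detached); // prints "needle"
    }
}
```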
Another technique, if you expect to have lots of little strings around that are likely to have the same values but come from an external source (like a file), is to call .intern() after creating the new string.
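A quick illustration of what intern() buys you: strings built at runtime are distinct objects even when equal, but interning maps equal values onto a single shared instance, so thousands of repeated field values cost one copy.

```java
public class InternDemo {
    public static void main(String[] args) {
        // Two strings constructed at runtime are distinct objects...
        String a = new String("EUR");
        String b = new String("EUR");
        System.out.println(a == b);                   // prints "false"

        // ...but intern() returns one canonical instance per value.
        System.out.println(a.intern() == b.intern()); // prints "true"
    }
}
```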
Note: this does depend on the implementation of String, which you really shouldn't have to be aware of, but in practice large applications sometimes have to rely on that knowledge. Be aware that versions of Java may change this; in fact, JDK 7u6 changed String.substring() to copy the characters, so on newer JVMs a substring no longer shares the original backing array.
You should be using a BufferedReader instead of storing this all into one large string.
If what you want to parse happens to be on a single line, then StringTokenizer will work quite nicely; otherwise you have to devise a way to read what you want from the file to parse out statements, then apply StringTokenizer to each statement.
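The line-by-line approach can be sketched like this; only one line is ever held in memory, and the Reader could just as well wrap a FileReader for a real file:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class LineTokens {
    // Tokenize a character stream line by line,
    // never holding the whole input in memory at once.
    static List<String> tokenize(Reader source) throws IOException {
        List<String> tokens = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(source)) {
            String line;
            while ((line = in.readLine()) != null) {
                // Default delimiters: space, tab, newline, etc.
                StringTokenizer st = new StringTokenizer(line);
                while (st.hasMoreTokens()) {
                    tokens.add(st.nextToken());
                }
            }
        }
        return tokens;
    }
}
```

In a real application you would process each token as it arrives rather than collecting them into a list, which would otherwise reintroduce the memory problem.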
You must review your algorithm for dealing with large data. You should process this data chunk by chunk, or use random file access without storing the data in memory. For example you can use StringTokenizer or StreamTokenizer, as @Zombies said. Look at parser-lexer techniques: when the parser parses some expression it asks the lexer to read the next lexeme (token), rather than reading the whole input stream at once.
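That pull model is exactly what java.io.StreamTokenizer provides: the parser asks for one token at a time, so memory use stays constant regardless of input size. A small sketch (the StringReader stands in for a FileReader over your real file):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StreamTokenizer;
import java.io.StringReader;

public class PullLexer {
    public static void main(String[] args) throws IOException {
        // Stand-in for a file reader; swap in new FileReader("data.txt").
        Reader source = new StringReader("price 12.5 qty 3");
        StreamTokenizer lexer = new StreamTokenizer(source);

        // Each call to nextToken() pulls exactly one token from the stream.
        while (lexer.nextToken() != StreamTokenizer.TT_EOF) {
            if (lexer.ttype == StreamTokenizer.TT_WORD) {
                System.out.println("word:   " + lexer.sval);
            } else if (lexer.ttype == StreamTokenizer.TT_NUMBER) {
                System.out.println("number: " + lexer.nval);
            }
        }
    }
}
```

By default StreamTokenizer parses numbers for you (nval is a double), which is why 3 comes back as 3.0; you can turn that off with ordinaryChars if you want raw word tokens.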