HashSet of Strings taking up too much memory, suggestions…?

闹比i 2020-12-29 08:44

I am currently storing a list of words (around 120,000) in a HashSet, for the purpose of using it as a list to check entered words against to see if they are spelt correctly. This is currently taking about 12MB of memory, which seems like far more than the word data itself should need. Any suggestions for a more compact representation?

8 Answers
  • 2020-12-29 09:22

    One way to save memory is to use a radix tree. This is more compact than a plain trie because chains of single-child nodes are collapsed into one node, so common prefixes are stored only once and nothing is stored redundantly.

    As your dictionary is fixed, another option is to build a perfect hash function for it. Your hash set then does not need buckets (and the associated overhead), because there cannot be any collisions. Any hash table/hash set implementation that uses open addressing can be used for this (such as Google Collections' ImmutableSet).
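
    For illustration, a minimal sketch of that route using Google Collections' (now Guava's) ImmutableSet; the file name is a placeholder assumption:

        import com.google.common.collect.ImmutableSet;

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.Set;

        public class SpellCheckDictionary {
            public static void main(String[] args) throws IOException {
                // Build the immutable set once at startup from the fixed word list.
                Set<String> words = ImmutableSet.copyOf(
                        Files.readAllLines(Paths.get("words.txt"), StandardCharsets.UTF_8));

                // Membership test used by the spell checker.
                System.out.println(words.contains("hello"));
            }
        }

    Note this removes per-bucket node overhead but still pays the per-String object cost; the packed custom representation described in another answer below saves considerably more.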

  • 2020-12-29 09:24

    You could use a prefix tree, also known as a trie: http://en.wikipedia.org/wiki/Trie

  • 2020-12-29 09:32

    A HashSet is probably not the right structure for this. Use a trie instead; a minimal sketch follows.
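
    This sketch assumes lowercase ASCII words; the 26-slot child array is chosen for clarity, not compactness:

        class Trie {
            private static final class Node {
                Node[] next = new Node[26]; // one slot per lowercase letter
                boolean isWord;
            }

            private final Node root = new Node();

            void add(String word) {
                Node n = root;
                for (int i = 0; i < word.length(); i++) {
                    int c = word.charAt(i) - 'a'; // assumes 'a'..'z' input
                    if (n.next[c] == null) n.next[c] = new Node();
                    n = n.next[c];
                }
                n.isWord = true;
            }

            boolean contains(String word) {
                Node n = root;
                for (int i = 0; i < word.length(); i++) {
                    int c = word.charAt(i) - 'a';
                    if (n.next[c] == null) return false;
                    n = n.next[c];
                }
                return n.isWord;
            }
        }

    Be aware that a naive 26-pointer node can actually use more memory than the original HashSet for 120,000 words; the memory win comes from compressed variants such as the radix tree and DAWG mentioned in the other answers.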

  • 2020-12-29 09:35

    This might be a bit late, but using Google you can easily find the DAWG investigation and C code that I posted a while ago.

    http://www.pathcom.com/~vadco/dawg.html

    TWL06 - 178,691 words - fits into 494,676 bytes

    The downside of a compressed-shared-node structure is that it does not work as a hash function for the words in your list. That is to say, it will tell you if a word exists, but it will not return an index to related data for a word that does exist.

    If you want perfect and complete hash functionality in a processor-cache-sized structure, you are going to have to read, understand, and modify a data structure called the ADTDAWG. It will be slightly larger than a traditional DAWG, but it is faster and more useful.

    http://www.pathcom.com/~vadco/adtdawg.html

    All the very best,

    JohnPaul Adamovsky

  • 2020-12-29 09:39

    You can also try a radix tree (http://en.wikipedia.org/wiki/Radix_tree). It is somewhat like a trie, but more memory-efficient because runs of single-child nodes are stored as one edge; a sketch follows.
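
    A sketch of the idea, assuming non-empty words; each edge stores a whole substring, so chains of single-child nodes collapse into a single edge:

        import java.util.HashMap;
        import java.util.Map;

        final class RadixTree {
            private static final class Edge {
                String label;   // substring stored on this edge
                Node child;
                Edge(String label, Node child) { this.label = label; this.child = child; }
            }

            private static final class Node {
                final Map<Character, Edge> edges = new HashMap<>();
                boolean isWord;
            }

            private final Node root = new Node();

            void add(String word) {
                Node n = root;
                int i = 0;
                while (i < word.length()) {
                    Edge e = n.edges.get(word.charAt(i));
                    if (e == null) {
                        // No matching edge: hang the whole remaining suffix on one new edge.
                        Node leaf = new Node();
                        leaf.isWord = true;
                        n.edges.put(word.charAt(i), new Edge(word.substring(i), leaf));
                        return;
                    }
                    // Length of the common prefix of the edge label and the rest of the word.
                    int j = 0;
                    while (j < e.label.length() && i + j < word.length()
                            && e.label.charAt(j) == word.charAt(i + j)) {
                        j++;
                    }
                    if (j < e.label.length()) {
                        // Word diverges inside the label: split the edge at position j.
                        Node mid = new Node();
                        mid.edges.put(e.label.charAt(j), new Edge(e.label.substring(j), e.child));
                        e.label = e.label.substring(0, j);
                        e.child = mid;
                    }
                    n = e.child;
                    i += j;
                    if (i == word.length()) n.isWord = true;
                }
            }

            boolean contains(String word) {
                Node n = root;
                int i = 0;
                while (i < word.length()) {
                    Edge e = n.edges.get(word.charAt(i));
                    // The edge label must match in full; a partial match means the word is absent.
                    if (e == null || !word.startsWith(e.label, i)) return false;
                    i += e.label.length();
                    n = e.child;
                }
                return n.isWord;
            }
        }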

  • 2020-12-29 09:40

    12MB to store 120,000 words is about 100 bytes per word. Probably at least 32 bytes of that is String overhead. If words average 10 letters and they are stored as 2-byte chars, that accounts for another 20 bytes. Then there is the reference to each String in your HashSet, which is probably another 4 bytes. The remaining 44 bytes is probably the HashSet entry and indexing overhead, or something I haven't considered above.

    The easiest thing to go after is the overhead of the String objects themselves, which can take far more memory than is required to store the actual character data. So your main approach would be to develop a custom representation that avoids storing a separate object for each string. In the course of doing this, you can also get rid of the HashSet overhead, since all you really need is a simple word lookup, which can be done by a straightforward binary search on an array that will be part of your custom implementation.

    You could create your custom implementation as an array of type int with one element per word. Each of these int elements would be broken into sub-fields containing a length and an offset that points into a separate backing array of type char. Put both arrays into a class that manages them, with public methods to retrieve words (or individual characters) given a word index, and to perform the simple searches on the list that your spell-check feature needs.

    If you have no more than 16,777,216 (2^24) characters of underlying string data (120,000 strings at an average length of 10 characters is only 1.2 million chars), you can store each string's starting offset into the backing char array in the low-order 24 bits of its int, and the string's length in the high-order 8 bits.

    Your char data will have your erstwhile strings crammed together without any delimiters, relying entirely upon the int array to know where each string starts and ends.

    Taking the above approach, your 120,000 words (at an average of 10 letters each) would require about 2,400,000 bytes of backing array data and 480,000 bytes of integer index data (120,000 x 4 bytes), for a total of 2,880,000 bytes, which is about a 75 percent savings over the present 12MB amount you have reported above.

    The words in the arrays would be sorted alphabetically, and your lookup process could be a simple binary search on the int array (retrieving the corresponding words from the char array for each test), which should be very efficient.
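
    Putting the above together, a sketch of the packed layout; it assumes the word list is already sorted, the total character data fits in 2^24 chars, and no word is 256 characters or longer:

        final class PackedDictionary {
            private final char[] data;  // all words concatenated, no delimiters
            private final int[] index;  // high 8 bits = length, low 24 bits = offset

            // 'words' must be sorted so lookups can binary-search the index.
            PackedDictionary(String[] words) {
                StringBuilder sb = new StringBuilder();
                index = new int[words.length];
                for (int i = 0; i < words.length; i++) {
                    // Offset fits in 24 bits and length in 8 bits by assumption.
                    index[i] = (words[i].length() << 24) | sb.length();
                    sb.append(words[i]);
                }
                data = sb.toString().toCharArray();
            }

            private String wordAt(int i) {
                int offset = index[i] & 0xFFFFFF;
                int length = index[i] >>> 24;
                return new String(data, offset, length);
            }

            boolean contains(String word) {
                // Plain binary search over the sorted packed index.
                int lo = 0, hi = index.length - 1;
                while (lo <= hi) {
                    int mid = (lo + hi) >>> 1;
                    int cmp = wordAt(mid).compareTo(word);
                    if (cmp == 0) return true;
                    if (cmp < 0) lo = mid + 1;
                    else hi = mid - 1;
                }
                return false;
            }
        }

    A production version would compare characters in place rather than allocating a String per probe, but this shows the layout; usage would be new PackedDictionary(sortedWords).contains("hello").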

    If your words happen to be entirely ASCII data, you could save an additional 1,200,000 bytes by storing the backing data as bytes instead of as chars.

    This could get more difficult if you needed to alter these strings. Apparently, in your case (spell checker), you don't need to (unless you want to support user additions to the list, which would be infrequent anyway, and so re-writing the char data and indexes to add or delete words might be acceptable).
