Why does Lucene cause OOM when indexing large files?

一向 2021-01-13 05:01

I’m working with Lucene 2.4.0 on JDK 1.6.0_07, and I consistently get OutOfMemoryError: Java heap space when trying to index large text files.

5 Answers
  • 2021-01-13 05:43

    We experienced some similar "out of memory" problems earlier this year when building our search indexes for our maven repository search engine at jarvana.com. We were building the indexes on a 64 bit Windows Vista quad core machine but we were running 32 bit Java and 32 bit Eclipse. We had 1.5 GB of RAM allocated for the JVM. We used Lucene 2.3.2. The application indexes about 100GB of mostly compressed data and our indexes end up being about 20GB.

    We tried a bunch of things, such as flushing the IndexWriter, explicitly calling the garbage collector via System.gc(), trying to dereference everything possible, etc. We used JConsole to monitor memory usage. Strangely, we would quite often still run into “OutOfMemoryError: Java heap space” errors when they should not have occurred, based on what we were seeing in JConsole. We tried switching to different versions of 32 bit Java, and this did not help.

    We eventually switched to 64 bit Java and 64 bit Eclipse. When we did this, our heap memory crashes during indexing disappeared when running with 1.5GB allocated to the 64 bit JVM. In addition, switching to 64 bit Java let us allocate more memory to the JVM (we switched to 3GB), which sped up our indexing.

    Not sure exactly what to suggest if you're on XP. For us, our OutOfMemoryError issues seemed to relate to something about Windows Vista 64 and 32 bit Java. Perhaps switching to running on a different machine (Linux, Mac, different Windows) might help. I don't know if our problems are gone for good, but they appear to be gone for now.

  • 2021-01-13 05:45

    Profiling is the only way to determine the cause of such large memory consumption.

    Also, check whether your code is closing its file handles, IndexReaders, and IndexWriters; leaving them open is a likely culprit for the OOM.
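
    As a rough sketch of that idea (Lucene 2.4 API; the paths, the analyzer choice, and the largeFile variable are placeholders, not from the question), closing the writer in a finally block guarantees buffered segments and file handles are released even if indexing fails:

        import java.io.File;
        import java.io.FileReader;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;

        public class IndexOneFile {
            public static void main(String[] args) throws Exception {
                File largeFile = new File("/path/to/large.txt");            // placeholder input file
                Directory dir = FSDirectory.getDirectory("/path/to/index"); // placeholder index location
                IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(),
                                                     IndexWriter.MaxFieldLength.UNLIMITED);
                try {
                    Document doc = new Document();
                    // A Reader-valued field streams the file instead of loading it into one String
                    doc.add(new Field("contents", new FileReader(largeFile)));
                    writer.addDocument(doc);
                } finally {
                    writer.close(); // flushes buffered docs and releases file handles
                    dir.close();
                }
            }
        }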

  • 2021-01-13 05:48

    For Hibernate users (on MySQL) who are also using Grails (via the Searchable plugin):

    I kept getting OOM errors when indexing 3M rows and about 5GB of data in total.

    These settings seem to have fixed the problem without requiring me to write any custom indexers.

    Here are some things to try:

    Compass settings:

            'compass.engine.mergeFactor':'500',
            'compass.engine.maxBufferedDocs':'1000'
    

    and for Hibernate (not sure if it's necessary, but it might be helping, especially with MySQL, which has JDBC result streaming disabled by default; see the [MySQL Connector/J implementation notes][1]):

            hibernate.jdbc.batch_size = 50  
            hibernate.jdbc.fetch_size = 30
            hibernate.jdbc.use_scrollable_resultset=true
    

    Also, specifically for MySQL, I had to add some URL parameters to the JDBC connection string:

            url = "jdbc:mysql://127.0.0.1/mydb?defaultFetchSize=500&useCursorFetch=true"
    

    (Update: with the URL parameters, memory usage doesn't go above 500MB.)

    In any case, I'm now able to build my Lucene/Compass index with less than a 2GB heap. Previously I needed 8GB to avoid OOM. Hope this helps someone.

    [1]: http://dev.mysql.com/doc/refman/5.0/en/connector-j-reference-implementation-notes.html "MySQL Connector/J implementation notes (streaming JDBC result sets)"

  • 2021-01-13 05:53

    You can set the IndexWriter to flush based on memory usage or on the number of buffered documents. I would suggest flushing based on memory and seeing if that fixes your issue. My guess is that your entire index is living in memory because you never flush it to disk.
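
    A minimal sketch of flushing by memory (Lucene 2.4 API; the 48 MB threshold is just an example to tune, and dir/analyzer are assumed to already exist):

        // Flush buffered documents to disk once they use roughly 48 MB of RAM,
        // instead of letting the buffer grow with the number of documents.
        IndexWriter writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);
        writer.setRAMBufferSizeMB(48.0);
        writer.setMaxBufferedDocs(IndexWriter.DISABLE_AUTO_FLUSH); // flush by RAM usage only, not doc count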

  • 2021-01-13 05:57

    In response to Gandalf:

    I can see you are setting the mergeFactor to 1000.

    The API docs say:

    setMergeFactor

    public void setMergeFactor(int mergeFactor)

    Determines how often segment indices are merged by addDocument(). With smaller values, less RAM is used while indexing, and searches on unoptimized indices are faster, but indexing speed is slower. With larger values, more RAM is used during indexing, and while searches on unoptimized indices are slower, indexing is faster. Thus larger values (> 10) are best for batch index creation, and smaller values (< 10) for indices that are interactively maintained.

    This is a convenience setting: the larger the mergeFactor, the more RAM is used during indexing.

    What I would suggest is to set it to something like 15 or so (on a trial-and-error basis), complemented with setRAMBufferSizeMB. Then call commit(), then optimize(), and then close() on the IndexWriter object (you could wrap all these calls in a single method on a JavaBean) and call that method when you are closing the index. For example:
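
    As a sketch of those suggestions (Lucene 2.4 API; the exact values are the trial-and-error part, and dir/analyzer are assumed to already exist):

        IndexWriter writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED);
        writer.setMergeFactor(15);        // far lower than 1000; less RAM held during merges
        writer.setRAMBufferSizeMB(32.0);  // flush buffered documents at ~32 MB

        // ... writer.addDocument(...) calls ...

        writer.commit();    // persist everything buffered so far
        writer.optimize();  // optional: merge segments down for faster searches
        writer.close();     // release the writer and its file handles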

    Post back with your results/feedback =]
