SOLR - Best approach to import 20 million documents from csv file

有刺的猬 2020-12-29 09:29

My current task is to figure out the best approach to load millions of documents into Solr. The data file is an export from the DB in CSV format.

Currently, I am t

5 answers
  • 2020-12-29 10:03

    The answers above have explained the single-machine ingestion strategies really well.

    Here are a few more options if you have a big-data infrastructure in place and want to implement a distributed data-ingestion pipeline:

    1. Use Sqoop to bring the data into Hadoop, or place your CSV file in Hadoop manually.
    2. Use one of the connectors below to ingest the data:

    the hive-solr connector or the spark-solr connector.

    PS:

    • Make sure no firewall blocks connectivity between the client nodes and the Solr/SolrCloud nodes.
    • Choose the right directory factory for data ingestion; if near-real-time search is not required, use StandardDirectoryFactory.
    • If you see the exception below in the client logs during ingestion, tune the autoCommit and autoSoftCommit settings in solrconfig.xml (see the sketch after this list):

    SolrServerException: No live SolrServers available to handle this request
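
    As a rough sketch only (assuming a Solr 4.x-style solrconfig.xml; the intervals are illustrative and need tuning for your hardware), the relevant sections look something like this:

    <!-- non-NRT directory factory for pure bulk indexing -->
    <directoryFactory name="DirectoryFactory"
                      class="solr.StandardDirectoryFactory"/>

    <updateHandler class="solr.DirectUpdateHandler2">
      <!-- hard commit: flush to disk regularly so the transaction log stays small -->
      <autoCommit>
        <maxDocs>100000</maxDocs>
        <maxTime>60000</maxTime>
        <openSearcher>false</openSearcher>
      </autoCommit>
      <!-- soft commit: only needed if documents must become visible during the load -->
      <autoSoftCommit>
        <maxTime>300000</maxTime>
      </autoSoftCommit>
    </updateHandler>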

  • 2020-12-29 10:04

    Definitely just load these into a normal database first. There are all sorts of tools for dealing with CSVs (for example, Postgres' COPY), so it should be easy. Using the Data Import Handler is also pretty simple, so this seems like the most friction-free way to load your data; it will also be faster, since you won't have unnecessary network/HTTP overhead. A rough configuration sketch follows below.
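
    For illustration, a minimal data-config.xml sketch for the Data Import Handler, assuming a Postgres table named items with id, name and category columns (driver, URL, credentials, table and field names are all placeholders):

    <dataConfig>
      <dataSource type="JdbcDataSource"
                  driver="org.postgresql.Driver"
                  url="jdbc:postgresql://localhost:5432/mydb"
                  user="solr" password="secret"/>
      <document>
        <entity name="item" query="SELECT id, name, category FROM items">
          <field column="id"       name="id"/>
          <field column="name"     name="name_s"/>
          <field column="category" name="category_s"/>
        </entity>
      </document>
    </dataConfig>

    The handler itself has to be registered as a request handler in solrconfig.xml and is then kicked off with a full-import command against the /dataimport endpoint.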

  • 2020-12-29 10:08

    Unless a database is already part of your solution, I wouldn't add that extra complexity. Quoting the Solr FAQ, it's your servlet container that is issuing the session timeout.

    As I see it, you have a couple of options (in my order of preference):

    Increase container timeout

    Increase the container timeout (the "maxIdleTime" parameter, if you're using the embedded Jetty instance).

    I'm assuming you only occasionally index such large files? Increasing the timeout temporarily might just be the simplest option; the snippet below shows where that setting lives.
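
    For orientation only, here is roughly where that parameter sits in the example/etc/jetty.xml shipped with the Solr example distribution (class and element names vary between Jetty versions, so treat this as a pointer rather than a drop-in config):

    <Call name="addConnector">
      <Arg>
        <New class="org.mortbay.jetty.bio.SocketConnector">
          <Set name="host"><SystemProperty name="jetty.host"/></Set>
          <Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>
          <!-- idle timeout in milliseconds; raise it while bulk loading -->
          <Set name="maxIdleTime">600000</Set>
        </New>
      </Arg>
    </Call>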

    Split the file

    Here's a simple Unix script that will do the job (splitting the file into 500,000-line chunks):

    # split data.csv into 500,000-line chunks: split_files.00, split_files.01, ...
    split -d -l 500000 data.csv split_files.

    # POST each chunk to Solr's CSV update handler, committing after each one
    for file in split_files.*
    do
      curl 'http://localhost:8983/solr/update/csv?fieldnames=id,name,category&commit=true' \
           -H 'Content-type:text/plain; charset=utf-8' --data-binary @"$file"
    done
    

    Parse the file and load in chunks

    The following Groovy script uses opencsv and SolrJ to parse the CSV file and commit changes to Solr every 500,000 lines.

    import au.com.bytecode.opencsv.CSVReader

    import org.apache.solr.client.solrj.SolrServer
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer
    import org.apache.solr.common.SolrInputDocument

    @Grapes([
        @Grab(group='net.sf.opencsv', module='opencsv', version='2.3'),
        @Grab(group='org.apache.solr', module='solr-solrj', version='3.5.0'),
        @Grab(group='ch.qos.logback', module='logback-classic', version='1.0.0'),
    ])

    SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr/");

    new File("data.csv").withReader { reader ->
        CSVReader csv = new CSVReader(reader)
        String[] result
        Integer count = 1
        Integer chunkSize = 500000

        // readNext() returns null at end of file
        while ((result = csv.readNext()) != null) {
            SolrInputDocument doc = new SolrInputDocument();

            doc.addField("id",         result[0])
            doc.addField("name_s",     result[1])
            doc.addField("category_s", result[2])

            server.add(doc)

            // commit every 500,000 documents instead of once at the very end
            if (count.mod(chunkSize) == 0) {
                server.commit()
            }
            count++
        }
        // commit whatever remains in the final partial chunk
        server.commit()
    }
    
  • 2020-12-29 10:12

    In Solr 4.0 (currently in beta), CSVs from a local directory can be imported directly using the UpdateHandler. Modifying the example from the Solr wiki:

    curl 'http://localhost:8983/solr/update?stream.file=exampledocs/books.csv&stream.contentType=text/csv;charset=utf-8'
    

    This streams the file from its local location, so there is no need to chunk it up and POST it via HTTP.

  • 2020-12-29 10:12

    The reference guide says ConcurrentUpdateSolrServer could/should be used for bulk updates.

    The Javadocs are somewhat incorrect (v3.6.2, v4.7.0):

    ConcurrentUpdateSolrServer buffers all added documents and writes them into open HTTP connections.

    It doesn't buffer indefinitely, but only up to queueSize documents, which is a constructor parameter. A rough usage sketch is below.
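
    As an illustration only (not from the reference guide), a minimal SolrJ 4.x Groovy sketch that pushes CSV rows through ConcurrentUpdateSolrServer; the queue size, thread count, field names and naive comma splitting are assumptions to adapt:

    @Grab(group='org.apache.solr', module='solr-solrj', version='4.7.0')
    import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer
    import org.apache.solr.common.SolrInputDocument

    // 10,000 buffered documents and 4 sender threads -- illustrative values, tune for your setup
    def server = new ConcurrentUpdateSolrServer("http://localhost:8983/solr", 10000, 4)

    new File("data.csv").eachLine { String line ->
        def fields = line.split(",")   // naive split; real CSV parsing should use opencsv as in the script above
        def doc = new SolrInputDocument()
        doc.addField("id",         fields[0])
        doc.addField("name_s",     fields[1])
        doc.addField("category_s", fields[2])
        server.add(doc)                // returns immediately; the background threads drain the queue
    }

    server.blockUntilFinished()        // wait until every buffered document has been sent
    server.commit()
    server.shutdown()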
