How To Push a Spark Dataframe to Elastic Search (Pyspark)

Asked 2021-02-06 09:13

Beginner ES Question here

What is the workflow or steps for pushing a Spark Dataframe to Elastic Search?

From research, I believe I need to use the newAPIHadoopFile() method.

2 Answers
  • 2021-02-06 09:40

    Managed to find an answer, so I'll share. Spark DataFrames (from pyspark.sql) don't currently support the newAPIHadoopFile() methods; however, df.rdd.saveAsNewAPIHadoopFile() was giving me errors as well. The trick was to convert each row to an (id, JSON string) tuple via the following function:

    def transform(doc):
        import json
        import hashlib

        _json = json.dumps(doc)
        # drop null-ish fields: real None plus the strings 'null'/'None'
        # (copy the keys first so entries can be deleted while iterating)
        for key in list(doc.keys()):
            if doc[key] is None or doc[key] == 'null' or doc[key] == 'None':
                del doc[key]
        # reuse an existing 'id', otherwise derive one by hashing the document
        if 'id' not in doc:
            id = hashlib.sha224(_json.encode('utf-8')).hexdigest()
            doc['id'] = id
        else:
            id = doc['id']
        _json = json.dumps(doc)
        return (id, _json)
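
    For illustration, a quick check of what transform returns (the field names here are hypothetical, and exact key order may vary):

    transform({'id': '42', 'colA': 'foo', 'colB': 'null'})
    # -> ('42', '{"id": "42", "colA": "foo"}')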
    

    So my JSON workflow is:

    1: df = spark.read.json('XXX.json')

    2: rdd_mapped = df.rdd.map(lambda y: y.asDict())

    3: final_rdd = rdd_mapped.map(transform)

    4:

    final_rdd.saveAsNewAPIHadoopFile(
        path='-',
        outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
        keyClass="org.apache.hadoop.io.NullWritable",
        valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
        conf={
            "es.resource": "<INDEX>/<TYPE>",
            "es.mapping.id": "id",
            "es.input.json": "true",
            "es.write.operation": "index",
            "es.nodes": "<NODE1>,<NODE2>,<NODE3>...",
            "es.port": "9200",
            "es.nodes.wan.only": "false",
            "es.net.http.auth.user": "elastic",
            "es.net.http.auth.pass": "changeme",
        })
    

    More information on the ES arguments can be found in the elasticsearch-hadoop documentation (see the 'Configuration' section).

  • 2021-02-06 09:54

    This worked for me - I had my data in df.

    df = df.drop('_id')  # drop the metadata '_id' column before writing
    df.write \
        .format("org.elasticsearch.spark.sql") \
        .option("es.resource", '%s/%s' % (conf['index'], conf['doc_type'])) \
        .option("es.nodes", conf['host']) \
        .option("es.port", conf['port']) \
        .save()
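
    For reference, conf here is a plain dict; a hypothetical example matching the keys used above:

    conf = {
        'index': 'my_index',     # hypothetical index name
        'doc_type': 'my_type',   # mapping type (pre-7.x clusters)
        'host': 'localhost',
        'port': '9200',
    }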
    

    I used this command to submit the job:

    /path/to/spark-submit --master spark://master:7077 \
        --jars ./jar_files/elasticsearch-hadoop-5.6.4.jar \
        --driver-class-path ./jar_files/elasticsearch-hadoop-5.6.4.jar \
        main_df.py
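
    Assuming the machine can reach Maven Central, the same connector can instead be pulled at submit time with --packages; a sketch using the matching Maven coordinates:

    /path/to/spark-submit --master spark://master:7077 \
        --packages org.elasticsearch:elasticsearch-hadoop:5.6.4 \
        main_df.py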
