Beginner ES Question here
What is the workflow or steps for pushing a Spark DataFrame to Elasticsearch?
From research, I believe I need to use saveAsNewAPIHadoopFile() with the elasticsearch-hadoop connector, but I can't work out the exact steps.
Managed to find an answer, so I'll share. Spark DataFrames (from pyspark.sql) don't currently support the newAPIHadoopFile() methods; however, df.rdd.saveAsNewAPIHadoopFile() was giving me errors as well. The trick was to convert each row dict into an (id, JSON string) pair via the following function:
def transform(doc):
    """Turn a row dict into an (id, json_string) pair for es.input.json mode."""
    import json
    import hashlib

    _json = json.dumps(doc)
    # Drop keys whose values are the literal strings 'null' or 'None'
    # (iterate over a copy of the keys so the dict can be modified safely)
    for key in list(doc.keys()):
        if doc[key] == 'null' or doc[key] == 'None':
            del doc[key]
    if 'id' not in doc:
        # No id field in the document, so derive a deterministic one from its JSON
        doc_id = hashlib.sha224(_json.encode('utf-8')).hexdigest()
        doc['id'] = doc_id
    else:
        doc_id = doc['id']
    _json = json.dumps(doc)
    return (doc_id, _json)
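For example, calling it on a plain dict (hypothetical values, not from the original post) produces the kind of (key, value) pair that es-hadoop can consume once es.input.json is enabled:

sample = {'name': 'alice', 'age': 30, 'nickname': 'None'}
key, value = transform(sample)
# key   -> a sha224 hex digest, since the dict had no 'id' field
# value -> '{"name": "alice", "age": 30, "id": "<that digest>"}'  ('nickname' was dropped)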
So my JSON workflow is:
1: df = spark.read.json('XXX.json')
2: rdd_mapped = df.rdd.map(lambda y: y.asDict())
3: final_rdd = rdd_mapped.map(transform)
4: write the (id, json) pairs out with saveAsNewAPIHadoopFile():
final_rdd.saveAsNewAPIHadoopFile(
    path='-',
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf={
        "es.resource": " / ",                  # target as index/type, e.g. "myindex/mytype"
        "es.mapping.id": "id",                 # use the 'id' field set by transform() as the document _id
        "es.input.json": "true",               # values are already JSON strings
        "es.write.operation": "index",
        "es.nodes": ", , ...",                 # comma-separated list of ES hosts
        "es.port": "9200",
        "es.nodes.wan.only": "false",
        "es.net.http.auth.user": "elastic",
        "es.net.http.auth.pass": "changeme",
    })
More information on the es.* configuration options can be found in the elasticsearch-hadoop documentation (scroll to 'Configuration').