I have pseudocode in Python that reads from a Kafka stream and upserts documents into Elasticsearch, incrementing a view counter if the document already exists.
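For context, a minimal sketch of the intended pipeline (written in Scala rather than my original Python pseudocode; the topic, index/type, field names, and Kafka settings are all placeholders). The missing piece, incrementing `views` when the document already exists, is exactly what I'm asking about:

    import org.apache.kafka.common.serialization.StringDeserializer
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
    import org.elasticsearch.spark.rdd.EsSpark

    object KafkaToEs {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("kafka-to-es")
        val ssc  = new StreamingContext(conf, Seconds(10))

        val kafkaParams = Map[String, Object](
          "bootstrap.servers"  -> "localhost:9092",
          "key.deserializer"   -> classOf[StringDeserializer],
          "value.deserializer" -> classOf[StringDeserializer],
          "group.id"           -> "doc-views",
          "auto.offset.reset"  -> "latest"
        )

        val stream = KafkaUtils.createDirectStream[String, String](
          ssc, PreferConsistent, Subscribe[String, String](Seq("docs"), kafkaParams))

        // Each record value is assumed to be a document id; build a minimal
        // upsert body per batch. As written, an existing document would just
        // have views overwritten with 1 instead of incremented.
        stream.foreachRDD { rdd =>
          val docs = rdd.map(record => Map("id" -> record.value, "views" -> 1))
          EsSpark.saveToEs(docs, "spark/docs", Map(
            "es.mapping.id"      -> "id",
            "es.write.operation" -> "upsert"
          ))
        }

        ssc.start()
        ssc.awaitTermination()
      }
    }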
You should be able to do it by setting the write operation to "update" (or "upsert") and passing your script through the script setting (the exact key depends on your ES version):
    EsSpark.saveToEs(rdd, "spark/docs", Map(
      "es.mapping.id"           -> "id",
      "es.write.operation"      -> "update",
      "es.update.script.inline" -> "your script"
    ))
Most likely you want "upsert", so the document is created when the id is new and the script runs when it already exists.
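A fuller sketch of that call, assuming ES 5.x with Painless and a views counter as in your question (the case class, field names, and the spark/docs index/type are illustrative, not part of the library):

    import org.elasticsearch.spark.rdd.EsSpark

    // Hypothetical shape of the documents coming off the stream.
    case class Doc(id: String, views: Long)

    val rdd = sc.makeRDD(Seq(Doc("doc-1", 1), Doc("doc-2", 1)))

    EsSpark.saveToEs(rdd, "spark/docs", Map(
      "es.mapping.id"           -> "id",
      // "upsert": insert the RDD document when the id is new,
      // run the script when the document already exists.
      "es.write.operation"      -> "upsert",
      // ES 5.x key; on ES 1.x/2.x the key is es.update.script
      // (with Groovy as the default script language).
      "es.update.script.inline" -> "ctx._source.views += 1",
      "es.update.script.lang"   -> "painless"
    ))

With plain "update" instead of "upsert", writes for ids that don't exist yet fail with a document-missing error, which is why "upsert" fits your "create or increment" semantics.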
There are some good unit tests for the Cascading integration in the same library (elasticsearch-hadoop); those settings should work for Spark as well, since both integrations use the same writer. I suggest reading those unit tests to pick the correct settings for your ES version.