Recently we migrated from "EMR on HDFS" to "EMR on S3" (EMRFS with consistent view enabled), and we realized that Spark 'saveAsTable' (Parquet format) writes to S3 were ~4x slower.
I think the S3 committer from Netflix is already open-sourced at https://github.com/rdblue/s3committer.
You can use: sparkContext.hadoopConfiguration.set("mapreduce.fileoutputcommitter.algorithm.version", "2")
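A minimal sketch of applying that setting before a write, assuming a `SparkSession` named `spark`; the bucket and paths here are placeholders, not from the thread:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("s3-parquet-write")
  .getOrCreate()

// Algorithm version 2 has tasks move their output into the final
// destination at task-commit time, avoiding the slow sequential
// rename of every file during job commit (costly on S3).
spark.sparkContext.hadoopConfiguration
  .set("mapreduce.fileoutputcommitter.algorithm.version", "2")

// Placeholder input/output locations for illustration.
val df = spark.read.parquet("s3://my-bucket/input/")
df.write.mode("overwrite").parquet("s3://my-bucket/output/")
```

The same setting can also be put in `spark-defaults.conf` as `spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version 2` so it applies cluster-wide.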
Since you are on EMR, just use the s3:// scheme (no need for s3a).
We are using Spark 2.0, and writing Parquet to S3 is pretty fast (about as fast as to HDFS).
If you want to read more, check out the JIRA ticket SPARK-10063.