Spark 2.0 deprecates 'DirectParquetOutputCommitter', how to live without it?

情歌与酒 2021-01-31 10:57

Recently we migrated from "EMR on HDFS" --> "EMR on S3" (EMRFS with consistent view enabled) and we realized that Spark 'SaveAsTable' (Parquet format) writes to S3 were ~4x slower.

2 Answers
  •  悲哀的现实
    2021-01-31 11:24

    You can use: sparkContext.hadoopConfiguration.set("mapreduce.fileoutputcommitter.algorithm.version", "2") (see the sketch after this answer).

    Since you are on EMR, just use s3:// (no need for s3a://).

    We are using Spark 2.0 and writing Parquet to S3 pretty fast (about as fast as to HDFS).

    If you want to read more, check out the JIRA ticket SPARK-10063.
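
    Below is a minimal sketch of the approach, assuming a standalone Spark 2.x Scala job; the application name and the S3 bucket/path are hypothetical, so substitute your own.

        import org.apache.spark.sql.SparkSession

        object S3ParquetWrite {
          def main(args: Array[String]): Unit = {
            val spark = SparkSession.builder()
              .appName("S3ParquetWrite")   // hypothetical app name
              .getOrCreate()

            // Use the v2 file output committer algorithm: tasks commit their output
            // directly to the final location, avoiding the extra rename pass at job
            // commit that is especially slow on S3.
            spark.sparkContext.hadoopConfiguration
              .set("mapreduce.fileoutputcommitter.algorithm.version", "2")

            // Small example DataFrame just to have something to write.
            val df = spark.range(1000).toDF("id")

            // On EMR the "s3://" scheme (EMRFS) is sufficient; "s3a://" is not required.
            df.write
              .mode("overwrite")
              .parquet("s3://my-bucket/output/ids")   // hypothetical bucket/path

            spark.stop()
          }
        }

    The same property can also be passed at submit time (e.g. via --conf spark.hadoop.mapreduce.fileoutputcommitter.algorithm.version=2) instead of setting it in code.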
