How to configure Spark / Glue to avoid creation of empty $_folder_$ after Glue job successful execution

Submitted by ⅰ亾dé卋堺 on 2021-01-24 13:47:41

Question


I have a simple Glue ETL job which is triggered by a Glue workflow. It drops duplicate rows from a crawler table and writes the result back into an S3 bucket. The job completes successfully; however, the empty "$folder$" marker objects that Spark generates remain in S3. They clutter the bucket hierarchy and cause confusion. Is there any way to configure Spark or the Glue context to hide/remove these folders after successful completion of the job?

[S3 console screenshot showing the empty $folder$ objects]


Answer 1:


OK, finally after a few days of testing I found the solution. Before pasting the code, let me summarize what I found...

  • Those $folder$ objects are created by Hadoop. Apache Hadoop creates these files when it needs to create a folder in an S3 bucket (Source 1). They are actually directory markers, written as path + / (Source 2).
  • To change the behavior, you need to change the Hadoop S3 write configuration in the Spark context. Read this and this and this.
  • Read about S3, S3A and S3N here and here.
  • Thanks to @stevel's comment here.
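To illustrate the naming convention described above, here is a minimal sketch (the `folder_marker_key` helper is my own illustration, not part of Hadoop's API):

```python
def folder_marker_key(dir_path: str) -> str:
    # Hadoop's legacy S3 filesystems (s3://, s3n://) mark an empty
    # "directory" with a zero-byte object named <path>_$folder$;
    # the S3A filesystem instead uses a zero-byte object named <path>/,
    # which the S3 console renders as an ordinary folder.
    return dir_path.rstrip("/") + "_$folder$"

print(folder_marker_key("output/year=2021"))  # output/year=2021_$folder$
```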

Now, the solution is to set the following Hadoop configuration in the Spark context.

from pyspark import SparkContext

sc = SparkContext()
hadoop_conf = sc._jsc.hadoopConfiguration()
# Route s3:// URIs through the S3A filesystem, which does not write
# the visible "_$folder$" marker objects
hadoop_conf.set("fs.s3.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")

To avoid the creation of _SUCCESS files, you need to set the following configuration as well:

hadoop_conf.set("mapreduce.fileoutputcommitter.marksuccessfuljobs", "false")
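Putting the two settings together, a sketch of a small helper (the name `glue_s3_cleanup_conf` is my own; applying the overrides still requires a live SparkContext as shown above):

```python
def glue_s3_cleanup_conf() -> dict:
    # Hadoop configuration overrides to apply via hadoop_conf.set(...)
    return {
        # Route s3:// URIs through the S3A filesystem, which does not
        # write visible "_$folder$" marker objects.
        "fs.s3.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem",
        # Stop the file output committer from writing _SUCCESS files.
        "mapreduce.fileoutputcommitter.marksuccessfuljobs": "false",
    }

# Applying the overrides (requires a running SparkContext):
# for key, value in glue_s3_cleanup_conf().items():
#     hadoop_conf.set(key, value)
```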

Make sure you use the s3:// URI when writing to the bucket, e.g.:

myDF.write.mode("overwrite").parquet('s3://XXX/YY', partitionBy=['DDD'])
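Note that the configuration change only prevents new markers; "_$folder$" objects left by earlier runs stay in the bucket. A sketch of filtering them out of a key listing for deletion (the `marker_keys` helper is hypothetical; the actual delete would go through e.g. boto3's `delete_objects`, not shown here):

```python
def marker_keys(keys):
    # Select the zero-byte "_$folder$" marker objects left over from
    # earlier job runs; pass the result to your S3 delete call.
    return [k for k in keys if k.endswith("_$folder$")]

listing = ["data/part-0000.parquet", "data_$folder$", "data/DDD=1_$folder$"]
print(marker_keys(listing))  # ['data_$folder$', 'data/DDD=1_$folder$']
```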


Source: https://stackoverflow.com/questions/65667996/how-to-configure-spark-glue-to-avoid-creation-of-empty-folder-after-glue-j
