Spark throws java.io.IOException: Failed to rename when saving part-xxxxx.gz

栀梦 · 2021-02-09 17:24

New Spark user here. I'm extracting features from many .tif images stored on AWS S3, each with an identifier like 02_R4_C7. I'm using Spark 2.2.1 and Hadoop 2.7.2.

I'm saving the results back to S3, and the job intermittently fails with java.io.IOException: Failed to rename when writing the part-xxxxx.gz output files.

2 Answers
  •  感情败类 · 2021-02-09 17:34

    It's not safe to use S3 as a direct destination of work without a "consistency layer" (EMR's consistent view, or, from the Apache Hadoop project itself, S3Guard), or a special output committer designed explicitly for work with S3 (the "S3A committers" in Hadoop 3.1+). Rename is where things fail: S3's listing inconsistency means the scan for files to copy may miss data, or find already-deleted files which it can't rename. Your stack trace looks exactly how I'd expect this to surface: job commits failing apparently at random.
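
    To make that concrete, here's a minimal sketch of the commit-by-rename pattern that breaks on raw S3 (the paths and the gzip option are illustrative, not from the original post). The default FileOutputCommitter writes each task's output under a _temporary directory and renames it into place at job commit; on S3 that "rename" is really a client-side list + copy + delete, so an inconsistent listing surfaces as "Failed to rename".

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder().appName("features").getOrCreate()

        // Hypothetical input; stands in for the extracted image features.
        val features = spark.read.parquet("s3a://my-bucket/features-in")

        features.write
          .option("compression", "gzip")        // produces part-xxxxx.gz files
          .csv("s3a://my-bucket/features-out")  // commit-by-rename happens here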

    Rather than go into the details, here's a video of Ryan Blue on the topic.

    Workaround: write to your local cluster FS, then use distcp to upload to S3; a sketch of this follows below.
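
    A minimal sketch of that workaround, assuming HDFS as the cluster FS (all paths are placeholders):

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder().appName("features").getOrCreate()
        val features = spark.read.parquet("s3a://my-bucket/features-in") // hypothetical input

        // 1. Write to HDFS, where rename is an atomic metadata operation,
        //    so the rename-based job commit is safe.
        features.write
          .option("compression", "gzip")
          .csv("hdfs:///user/me/features-out")

        // 2. Then bulk-copy the committed output to S3 from a cluster shell:
        //    hadoop distcp hdfs:///user/me/features-out s3a://my-bucket/features-out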

    PS: for Hadoop 2.7+, switch to the s3a:// connector. Without S3Guard enabled it has exactly the same consistency problem, but better performance; a minimal configuration sketch follows below.
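
    For what the switch looks like in practice, a sketch with placeholder bucket and credential wiring; on Spark 2.x you also need hadoop-aws (and its matching AWS SDK) on the classpath, e.g. spark-submit --packages org.apache.hadoop:hadoop-aws:2.7.2:

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder()
          .appName("features")
          // spark.hadoop.* keys are forwarded into the Hadoop configuration
          .config("spark.hadoop.fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
          .config("spark.hadoop.fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))
          .getOrCreate()

        // On the read/write side only the URI scheme changes:
        // s3n://my-bucket/... becomes s3a://my-bucket/...
        val df = spark.read.parquet("s3a://my-bucket/features-in")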
