For checkpointing purposes I'm trying to set up an Amazon S3 bucket as the checkpoint directory:
val checkpointDir = "s3a://bucket-name/checkpoint.txt"
val sc = new SparkContext(conf)
sc.setCheckpointDir(checkpointDir)
If you'd like to use a region that requires Signature V4 in Spark, you can pass the flag -Dcom.amazonaws.services.s3.enableV4
to the driver options and executor options at runtime. For example:
spark-submit --conf spark.driver.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
--conf spark.executor.extraJavaOptions='-Dcom.amazonaws.services.s3.enableV4' \
... (other spark options)
With these settings, Spark is able to write to Frankfurt (and other V4-only regions) even with a not-so-fresh AWS SDK version (com.amazonaws:aws-java-sdk:1.7.4 in my case).
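If you'd rather not pass these flags on every submit, the same JVM options can be set once in Spark's defaults file (the standard conf/spark-defaults.conf location is assumed here):

```
spark.driver.extraJavaOptions   -Dcom.amazonaws.services.s3.enableV4
spark.executor.extraJavaOptions -Dcom.amazonaws.services.s3.enableV4
```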
This message corresponds to something like a "bad endpoint" or a bad signature version.
As seen here, Frankfurt is the only region that does not support Signature Version 2, and it's the one I picked.
Of course, after all my research I can't say exactly what a signature version is; it isn't obvious in the documentation. But V2 seems to work with s3a.
The endpoint shown in the S3 console is not the real endpoint, it's just the web endpoint.
You have to set one of these endpoints, like this:
sc.hadoopConfiguration.set("fs.s3a.endpoint", "s3-eu-west-1.amazonaws.com")
It works by default with the US endpoint.
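Putting the endpoint setting and the V4 flag together, here is a minimal sketch (assuming the Frankfurt region, eu-central-1, and Hadoop's s3a connector; the bucket name and app name are placeholders):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Enable Signature V4 in the AWS SDK before any S3 client is created --
// same effect as -Dcom.amazonaws.services.s3.enableV4 on the command line.
System.setProperty("com.amazonaws.services.s3.enableV4", "true")

val conf = new SparkConf().setAppName("s3a-checkpoint-example")
val sc = new SparkContext(conf)

// Point s3a at the regional endpoint; Frankfurt (eu-central-1) is V4-only.
sc.hadoopConfiguration.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")

// Checkpoint directory on S3 (placeholder bucket name).
sc.setCheckpointDir("s3a://my-bucket/checkpoints")
```

Note that the system property must be set before the first S3 client is instantiated, otherwise the SDK has already chosen its signer.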
I was facing the same issue when running Spark locally; for me the reason was that SIGV4 was not getting set. This code helped me:
import com.amazonaws.SDKGlobalConfiguration
System.setProperty(SDKGlobalConfiguration.ENABLE_S3_SIGV4_SYSTEM_PROPERTY, "true")