I've been trying for the past couple of hours to set up a transfer from S3 to my Google Cloud Storage bucket.
The error I keep getting when creating the transfer is: "I
I'm one of the devs on Transfer Service.
You'll need to add "s3:GetBucketLocation" to your permissions.
It would be preferable if the error you received referred specifically to your ACLs rather than to an invalid key; I'll look into that.
EDIT: Adding more info to this post. There is documentation which lists this requirement: https://cloud.google.com/storage/transfer/
Here's a quote from the section on "Configuring Access":
"If your source data is an Amazon S3 bucket, then set up an AWS Identity and Access Management (IAM) user so that you give the user the ability to list the Amazon S3 bucket, get the location of the bucket, and read the objects in the bucket." [Emphasis mine.]
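To make those three abilities concrete, here is a minimal sketch of an IAM policy granting them. The bucket name my-bucket and user name transfer-user are placeholders, not anything from the original question:

```shell
# Hypothetical IAM policy covering the three requirements from the docs:
# list the bucket, get its location, and read its objects.
cat > transfer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
EOF
# Then attach it to the IAM user whose keys you give the Transfer Service, e.g.:
# aws iam put-user-policy --user-name transfer-user \
#   --policy-name gcs-transfer --policy-document file://transfer-policy.json
```

Note that the bucket-level actions (ListBucket, GetBucketLocation) apply to the bucket ARN, while GetObject applies to the objects under it (`/*`).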
EDIT2: Much of the information provided in this answer could be useful for others, so it will remain here, but John's answer actually got to the bottom of OP's issue.
I encountered the same problem a couple of minutes ago and was able to solve it by providing an admin user's access key and secret key.
It worked for me. Just FYI, my S3 bucket was in North Virginia.
I am an engineer on the Transfer service. The reason you encountered this problem is that the AWS S3 region ap-southeast-1 (Singapore) is not yet supported by the Transfer service, because GCP does not have a networking arrangement with AWS S3 in that region. We can consider supporting that region now, but your transfer would be much slower than from other regions.
On our end, we are making a fix to display a clearer error message.
You can also get the 'Invalid access key' error if you try to transfer a subdirectory rather than the root of an S3 bucket. For example, I tried to transfer s3://my-bucket/my-subdirectory and it kept failing with the invalid access key error, despite my having given Google read permissions for the entire S3 bucket. It turns out the transfer service doesn't support transferring subdirectories of an S3 bucket; you must specify the root as the source for the transfer: s3://my-bucket.
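If you only want a subset of the bucket, one workaround is to keep the bucket root as the source and narrow the transfer with the Storage Transfer API's objectConditions.includePrefixes field. A sketch of such a transferSpec fragment follows; the bucket names and prefix are placeholders, and this assumes you are creating the job through the REST API:

```shell
# Hypothetical transferSpec: source is the bucket ROOT (as the service
# requires), and includePrefixes restricts which objects are copied.
cat > transfer-spec.json <<'EOF'
{
  "awsS3DataSource": { "bucketName": "my-bucket" },
  "gcsDataSink": { "bucketName": "my-gcs-bucket" },
  "objectConditions": { "includePrefixes": ["my-subdirectory/"] }
}
EOF
# Sanity-check that the spec is well-formed JSON before submitting it:
python3 -m json.tool transfer-spec.json > /dev/null && echo "spec is valid JSON"
```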
Maybe this can help:
First, specify s3_host in your boto config file, i.e., the endpoint containing the region (no need to specify s3_host if the region is us-east-1, which is the default). E.g.:
vi ~/.boto
s3_host = s3-us-west-1.amazonaws.com
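For context, here is a sketch of what that stanza can look like; in the .boto file that `gsutil config` generates, s3_host sits in the [Credentials] section alongside the AWS keys. The key values below are placeholders, and the sketch writes to ./boto.sample so nothing real is overwritten (merge it into your own ~/.boto):

```shell
# Sketch of the relevant part of a ~/.boto file. Key values are placeholders.
cat > boto.sample <<'EOF'
[Credentials]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = exampleSecretKey
# Endpoint for the bucket's region (omit for us-east-1, the default):
s3_host = s3-us-west-1.amazonaws.com
EOF
```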
That's it. Now you can proceed with any one of these commands:
gsutil -m cp -r s3://bucket-name/folder-name gs://Bucket/
gsutil -m cp -r s3://bucket-name/folder-name/specific-file-name gs://Bucket/
gsutil -m cp -r s3://bucket-name/folder-name/ gs://Bucket/
gsutil -m cp -r s3://bucket-name/folder-name/file-name-prefix* gs://Bucket/
You can also try rsync.
https://cloud.google.com/storage/docs/gsutil/commands/rsync