I'm doing some work for a client that has 2 separate AWS accounts. We need to move all the files in a bucket on one of their S3 accounts to a new bucket on the 2nd account.
We thought that s3cmd would allow this, using the format:
s3cmd cp s3://bucket1 s3://bucket2 --recursive
However, this only lets me use the keys of one account; I can't specify credentials for the 2nd account.
Is there a way to do this without downloading the files and uploading them again to the 2nd account?
You don't have to open permissions to everyone. Use the bucket policies below on the source and destination buckets to copy from a bucket in one account to another using an IAM user.
Bucket to copy from: SourceBucket
Bucket to copy to: DestinationBucket
Source AWS account ID: XXXX-XXXX-XXXX
Source IAM user: src-iam-user
The policies below mean that the IAM user XXXX-XXXX-XXXX:src-iam-user has
s3:ListBucket and s3:GetObject privileges on SourceBucket/* and
s3:ListBucket and s3:PutObject privileges on DestinationBucket/*
On the SourceBucket the policy should be like:
{
"Id": "Policy1357935677554",
"Statement": [{
"Sid": "Stmt1357935647218",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": "arn:aws:s3:::SourceBucket",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
}, {
"Sid": "Stmt1357935676138",
"Action": ["s3:GetObject"],
"Effect": "Allow",
"Resource": "arn:aws:s3:::SourceBucket/*",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
}]
}
On the DestinationBucket the policy should be:
{
"Id": "Policy1357935677555",
"Statement": [{
"Sid": "Stmt1357935647218",
"Action": ["s3:ListBucket"],
"Effect": "Allow",
"Resource": "arn:aws:s3:::DestinationBucket",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
}, {
"Sid": "Stmt1357935676138",
"Action": ["s3:PutObject"],
"Effect": "Allow",
"Resource": "arn:aws:s3:::DestinationBucket/*",
"Principal": {"AWS": "arn:aws:iam::XXXXXXXXXXXX:user/src-iam-user"}
}]
}
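Keeping the account ID, user name, and bucket name in variables makes it easier to keep the two policies consistent. A minimal sketch that prints the source-bucket policy (all values are placeholders to be replaced with your own):

```shell
# Placeholders: substitute your real account ID, user, and bucket.
ACCOUNT_ID="XXXXXXXXXXXX"
IAM_USER="src-iam-user"
BUCKET="SourceBucket"
PRINCIPAL="arn:aws:iam::${ACCOUNT_ID}:user/${IAM_USER}"

# Print the policy; redirect to a file to use it.
cat <<EOF
{
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::${BUCKET}",
      "Principal": {"AWS": "${PRINCIPAL}"}
    },
    {
      "Action": ["s3:GetObject"],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::${BUCKET}/*",
      "Principal": {"AWS": "${PRINCIPAL}"}
    }
  ]
}
EOF
```

The destination policy is the same shape with DestinationBucket and s3:PutObject swapped in.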
The command to run is: s3cmd cp s3://SourceBucket/File1 s3://DestinationBucket/File1
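To move every object rather than a single file, the same copy works recursively. A sketch (bucket names are placeholders, and the command is echoed here rather than executed):

```shell
# Placeholder bucket URIs; substitute your own.
SRC="s3://SourceBucket"
DST="s3://DestinationBucket"
# --recursive walks every key under the source prefix.
CMD="s3cmd cp --recursive ${SRC}/ ${DST}/"
echo "$CMD"
```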
Bandwidth inside AWS isn't billed, so you could save some money and time by doing it all from a box inside AWS, as long as the buckets are in the same region.
As for doing it without the files touching down on a computer somewhere: I don't think so. One exception: since AWS does bulk uploads from hard drives you mail to them, they might do the same for you for a bucket-to-bucket transfer.
I would suggest using CloudBerry S3 Explorer as a simple solution to get things moving quickly. It also lets you make use of AWS's free internal-bandwidth transfers.
You can also use the CloudBerry SDK tools to integrate into your apps.
Good luck, Jon
Even though roles and policies are a really elegant way, I have another solution:
- Get your AWS credentials for the source bucket's account
- Do the same for the destination bucket's account
On your local machine (desktop or any server outside AWS), create a new profile with the credentials of the source bucket's account:
aws --profile ${YOUR_CUSTOM_PROFILE} configure
Fill in aws_access_key_id and aws_secret_access_key (you may skip region and output).
Save your destination bucket's credentials as environment variables:
export AWS_ACCESS_KEY_ID=AKI...
export AWS_SECRET_ACCESS_KEY=CN...
Now do the sync, but add the crucial --profile parameter:
aws --profile ${YOUR_CUSTOM_PROFILE} s3 sync s3://${SOURCE_BUCKET_NAME} s3://${DESTINATION_BUCKET_NAME}
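Put together, the steps above form a short script. This is a sketch: the profile name, keys, and bucket names are placeholders, and the final command is echoed rather than executed (drop the echo to actually run the sync):

```shell
#!/bin/sh
# Profile holding the SOURCE account's keys (created via `aws configure`).
PROFILE="source-account"
# Environment variables holding the DESTINATION account's keys (placeholders).
export AWS_ACCESS_KEY_ID="AKIAXXXXXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
SOURCE_BUCKET_NAME="source-bucket"
DESTINATION_BUCKET_NAME="destination-bucket"
# Echoed rather than run, so the sketch is safe to execute as-is.
echo aws --profile "$PROFILE" s3 sync \
  "s3://$SOURCE_BUCKET_NAME" "s3://$DESTINATION_BUCKET_NAME"
```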
OK, figured out an answer. There are likely other ways to do this too, but this one is very easy.
I was able to do this with the s3cmd utility, but it can likely be done with similar tools.
When you configure s3cmd, configure it with your account Access Key and Secret Access Key.
Log into the S3 web console (https://console.aws.amazon.com/s3/home) using the account of the bucket you are transferring to.
Click on your bucket, then Actions, then Properties.
At the bottom, under the "Permissions" tab, click "Add more permissions".
Set "Grantee" to Everyone.
Check "List" and "Upload/Delete".
Save.
To transfer, run from your terminal
s3cmd cp s3://from_account_bucket s3://to_account_bucket --recursive
When the transfer is complete, you should immediately visit the S3 console again and remove the permissions you added for the bucket.
There is obviously a security problem here. The bucket we're transferring to is open to everyone. The chances of someone finding your bucket name are small, but do exist.
You can use bucket policies as an alternative way to open access only to specific accounts, but that was too bloody difficult for me, so I leave it as an exercise for those who need to figure it out.
Hope this helps.
Source: https://stackoverflow.com/questions/12700921/s3-moving-files-between-buckets-on-different-accounts