I am trying to grant permissions to an existing account in s3.
The bucket is owned by the account, but the data was copied from another account's bucket.
The Python code below is more efficient this way; otherwise it takes a lot longer.
import boto3
import sys

client = boto3.client('s3')
BUCKET = 'mybucket'

def process_s3_objects(prefix):
    """Apply the ACL to every key under the given prefix, one page of results at a time."""
    kwargs = {'Bucket': BUCKET, 'Prefix': prefix}
    failures = []
    while True:
        resp = client.list_objects_v2(**kwargs)
        for obj in resp.get('Contents', []):
            try:
                set_acl(obj['Key'])
            except Exception:
                failures.append(obj['Key'])
                continue
        # Stop when there are no more pages of results.
        if 'NextContinuationToken' not in resp:
            break
        kwargs['ContinuationToken'] = resp['NextContinuationToken']
    print("failures: " + str(failures))

def set_acl(key):
    print(key)
    client.put_object_acl(
        ACL='bucket-owner-full-control',
        Bucket=BUCKET,
        Key=key
    )

def get_account_canonical_id():
    return client.list_buckets()["Owner"]["ID"]

process_s3_objects(sys.argv[1])
This can only be achieved using pipes. Try:
aws s3 ls s3://bucket/path/ --recursive | awk '{cmd="aws s3api put-object-acl --acl bucket-owner-full-control --bucket bucket --key "$4; system(cmd)}'
You will need to run the command individually for every object.
You might be able to short-cut the process by using:
aws s3 cp --acl bucket-owner-full-control --metadata Key=Value --profile <original_account_profile> s3://bucket/path s3://bucket/path
That is, you copy the files to themselves, but with the added ACL that grants permissions to the bucket owner.
If you have sub-directories, then add --recursive.
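If you prefer to drive the same copy-in-place from boto3, here is a minimal sketch, assuming a CLI profile named original_account_profile for the account that originally uploaded the objects, and illustrative bucket/key names:

import boto3

# 'original_account_profile' is a hypothetical profile name; use whatever profile
# holds the credentials of the account that originally wrote the objects.
session = boto3.session.Session(profile_name='original_account_profile')
s3 = session.client('s3')

# Copy the object onto itself. Replacing the metadata is what makes the self-copy
# legal, and the new copy is written with an ACL granting the bucket owner full control.
# Note: MetadataDirective='REPLACE' with no Metadata argument drops any existing
# user metadata; pass Metadata={...} if you need to keep or set it.
s3.copy_object(
    Bucket='bucket',
    Key='path/to/object',
    CopySource={'Bucket': 'bucket', 'Key': 'path/to/object'},
    MetadataDirective='REPLACE',
    ACL='bucket-owner-full-control'
)

Keep in mind that copy_object only handles objects up to 5 GB; anything larger needs a multipart copy.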
The command for a single object looks like this:
aws s3api put-object-acl --bucket bucketname_example_3636 --key bigdirectories2_noglacier/bkpy_workspace_sda4.tar --acl bucket-owner-full-control
To run it for every object, first get the list of keys (the awk step strips the date, time and size columns that aws s3 ls prints before each key; keys containing spaces would need extra handling):
aws s3 ls s3://bucketname_example_3636 --recursive | awk '{print $4}' > listoffile.txt
Then turn each key into a put-object-acl command and execute the resulting script:
sed 's/^\(.*\)$/aws s3api put-object-acl --bucket bucketname_example_3636 --key \1 --acl bucket-owner-full-control/' listoffile.txt > listoffile_v2.txt
sed '1 i\#!/bin/bash' listoffile_v2.txt > listoffile_v3.txt
cp listoffile_v3.txt listoffile_v3.sh
chmod u+x listoffile_v3.sh
./listoffile_v3.sh
Use Python to set the permissions recursively:
#!/usr/bin/env python3
import boto3
import sys

client = boto3.client('s3')
BUCKET = 'enter-bucket-name'

def process_s3_objects(prefix):
    """Grant full control on every key under the given prefix, one page of results at a time."""
    kwargs = {'Bucket': BUCKET, 'Prefix': prefix}
    failures = []
    while True:
        resp = client.list_objects_v2(**kwargs)
        for obj in resp.get('Contents', []):
            try:
                print(obj['Key'])
                set_acl(obj['Key'])
            except Exception:
                failures.append(obj['Key'])
                continue
        # Stop when there are no more pages of results.
        if 'NextContinuationToken' not in resp:
            break
        kwargs['ContinuationToken'] = resp['NextContinuationToken']
    print("failures:", failures)

def set_acl(key):
    # Grant full control to the canonical user ID of the account whose
    # credentials are running this script.
    client.put_object_acl(
        GrantFullControl="id=%s" % get_account_canonical_id(),
        Bucket=BUCKET,
        Key=key
    )

def get_account_canonical_id():
    return client.list_buckets()["Owner"]["ID"]

process_s3_objects(sys.argv[1])
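To run it, pass the prefix you want to process as the first argument, e.g. python3 set_acl.py path/to/prefix/ (set_acl.py being whatever name you save the script under).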
The other answers are OK, but the FASTEST way to do this is to use the aws s3 cp command with the option --metadata-directive REPLACE, like this:
aws s3 cp --recursive --acl bucket-owner-full-control s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE
This gives speeds of between 50 MiB/s and 80 MiB/s.
John R's suggestion from the comments, to use a 'dummy' option such as --storage-class STANDARD, also works, but it only gave me copy speeds of between 5 MiB/s and 11 MiB/s.
The inspiration for trying this came from AWS's support article on the subject: https://aws.amazon.com/premiumsupport/knowledge-center/s3-object-change-anonymous-ownership/
NOTE: If you encounter 'access denied' errors for some of your objects, it is likely because you are using AWS creds for the bucket-owning account, whereas you need to use creds for the account the files were copied from.
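In boto3 terms, that means building the client from a session that uses the source account's credentials. A minimal sketch, assuming a hypothetical CLI profile named source-account for that account and illustrative bucket/key names:

import boto3

# 'source-account' is a hypothetical profile name for the account that originally
# wrote the objects; that account is typically the only one able to change their ACLs.
session = boto3.session.Session(profile_name='source-account')
s3 = session.client('s3')

s3.put_object_acl(
    Bucket='mybucket',
    Key='path/to/object',
    ACL='bucket-owner-full-control'
)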