How to change permissions recursively on a folder with AWS s3 or AWS s3api

情歌与酒 2020-12-06 02:28

I am trying to grant permissions to an existing account in s3.

The bucket is owned by the account, but the data was copied from another account's bucket.


7 Answers
  • 2020-12-06 02:30

    This version of the Python code (compare the similar answer below) is more efficient; the other way takes a lot longer.

    import boto3
    import sys
    
    client = boto3.client('s3')
    BUCKET = 'mybucket'
    
    def process_s3_objects(prefix):
        """Set the ACL on every key under the given prefix."""
        kwargs = {'Bucket': BUCKET, 'Prefix': prefix}
        failures = []
        while True:
            resp = client.list_objects_v2(**kwargs)
            for obj in resp.get('Contents', []):
                try:
                    set_acl(obj['Key'])
                except Exception:
                    # Remember the failed key and keep going.
                    failures.append(obj['Key'])
            # Stop once the listing is no longer truncated.
            if not resp.get('IsTruncated'):
                break
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        print("failures: " + str(failures))
    
    def set_acl(key):
        print(key)
        client.put_object_acl(
            ACL='bucket-owner-full-control',
            Bucket=BUCKET,
            Key=key
        )
    
    process_s3_objects(sys.argv[1])
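    
    Run it with the key prefix as the first argument, e.g. assuming the script is saved as set_acl.py (a file name chosen here for illustration):
    
    python3 set_acl.py path/inside/bucket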
    
  • 2020-12-06 02:33

    This can be achieved by piping the bucket listing through awk. Try:
    
    aws s3 ls s3://bucket/path/ --recursive | awk '{cmd="aws s3api put-object-acl --acl bucket-owner-full-control --bucket bucket --key "$4; system(cmd)}'
    
    Note that $4 is only the fourth whitespace-separated field of the listing, so keys that contain spaces will be cut short.
    
  • 2020-12-06 02:41

    You will need to run the command individually for every object.

    You might be able to short-cut the process by using:

    aws s3 cp --acl bucket-owner-full-control --metadata Key=Value --profile <original_account_profile> s3://bucket/path s3://bucket/path
    

    That is, you copy the files to themselves, but with the added ACL that grants permissions to the bucket owner.

    If you have sub-directories, then add --recursive, as in the sketch below.
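    
    A recursive variant of the same self-copy (with the same placeholder profile name as above):
    
    aws s3 cp --recursive --acl bucket-owner-full-control --metadata Key=Value --profile <original_account_profile> s3://bucket/path s3://bucket/path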

  • 2020-12-06 02:44

    The main command is this, where bucketname_example_3636 is your bucket name:
    
    aws s3api put-object-acl --bucket bucketname_example_3636 --key bigdirectories2_noglacier/bkpy_workspace_sda4.tar --acl bucket-owner-full-control
    
    My idea is to generate the script with sed, which is easy.
    
    1. Get the list of the keys:
    
    aws s3 ls s3://bucketname_example_3636 --recursive > listoffile.txt
    
    2. Say you have 1000 files, so 1000 keys. Use sed to generate the 1000 commands automatically. The capture group \1 is your key; the expression skips the first three fields of each listing line (date, time, size):
    
    sed -E 's/^[^ ]+ +[^ ]+ +[^ ]+ +(.*)$/aws s3api put-object-acl --bucket bucketname_example_3636 --key "\1" --acl bucket-owner-full-control/' listoffile.txt > listoffile_v2.txt;
    
    3. Prepend the shebang line needed to turn the text file into a bash script:
    
    sed '1 i\#!/bin/bash' listoffile_v2.txt > listoffile_v3.txt;
    
    4. Now just change the file extension:
    
    cp listoffile_v3.txt listoffile_v3.sh;
    
    Now you have a script. Make it executable and run it:
    
    chmod u+x listoffile_v3.sh;
    ./listoffile_v3.sh;
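    
    For what it's worth, the same steps can be collapsed into a single pipeline; a minimal sketch reusing the bucket name above (it assumes key names contain no newlines):
    
    aws s3 ls s3://bucketname_example_3636 --recursive \
      | sed -E 's/^[^ ]+ +[^ ]+ +[^ ]+ +//' \
      | while IFS= read -r key; do
          aws s3api put-object-acl --bucket bucketname_example_3636 --key "$key" --acl bucket-owner-full-control
        done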

  • 2020-12-06 02:48

    Use Python to set the permissions recursively:

    #!/usr/bin/env python
    import boto3
    import sys
    
    client = boto3.client('s3')
    BUCKET = 'enter-bucket-name'
    
    def process_s3_objects(prefix):
        """Set the ACL on every key under the given prefix."""
        kwargs = {'Bucket': BUCKET, 'Prefix': prefix}
        failures = []
        while True:
            resp = client.list_objects_v2(**kwargs)
            for obj in resp.get('Contents', []):
                try:
                    print(obj['Key'])
                    set_acl(obj['Key'])
                except Exception:
                    # Remember the failed key and keep going.
                    failures.append(obj['Key'])
            # Stop once the listing is no longer truncated.
            if not resp.get('IsTruncated'):
                break
            kwargs['ContinuationToken'] = resp['NextContinuationToken']
        print("failures:", failures)
    
    def set_acl(key):
        # Grant full control to the calling account's canonical user ID.
        client.put_object_acl(
            GrantFullControl="id=%s" % get_account_canonical_id(),
            Bucket=BUCKET,
            Key=key
        )
    
    def get_account_canonical_id():
        return client.list_buckets()["Owner"]["ID"]
    
    process_s3_objects(sys.argv[1])
    
  • 2020-12-06 02:48

    The other answers are ok, but the FASTEST way to do this is to use the aws s3 cp command with the option --metadata-directive REPLACE, like this:

    aws s3 cp --recursive --acl bucket-owner-full-control s3://bucket/folder s3://bucket/folder --metadata-directive REPLACE

    This gives copy speeds of between 50 MiB/s and 80 MiB/s.

    An answer in the comments from John R suggested using a 'dummy' option, such as --storage-class STANDARD. Whilst this works, it only gave me copy speeds between 5 MiB/s and 11 MiB/s.

    The inspiration for trying this came from AWS's support article on the subject: https://aws.amazon.com/premiumsupport/knowledge-center/s3-object-change-anonymous-ownership/

    NOTE: If you encounter 'access denied' for some of your objects, it is likely because you are using AWS credentials for the bucket-owning account, whereas you need to use credentials for the account the files were copied from, as sketched below.
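    
    For instance, if the source account is configured as a named CLI profile (the name source-account here is purely illustrative), run the copy under that profile:
    
    aws s3 cp --recursive --acl bucket-owner-full-control --metadata-directive REPLACE --profile source-account s3://bucket/folder s3://bucket/folder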
