S3: make a public folder private again?

北恋 2021-01-30 20:16

How do you make an AWS S3 public folder private again?

I was testing out some staging data, so I made the entire folder public within a bucket. I'd like to restrict it again.

12 answers
  • 2021-01-30 20:29

    I did this today. My situation was that I had certain top-level directories whose files needed to be made private, while some other folders needed to stay public.

    I decided to use the s3cmd like many other people have already shown. But given the massive number of files, I wanted to run parallel s3cmd jobs for each directory. And since it was going to take a day or so, I wanted to run them as background processes on an EC2 machine.

    I set up an Ubuntu machine using the t2.xlarge type. I chose the xlarge after s3cmd failed with out of memory messages on a micro instance. xlarge is probably overkill but this server will only be up for a day.

    After logging into the server, I installed and configured s3cmd:

    sudo apt-get install python-setuptools
    wget https://sourceforge.net/projects/s3tools/files/s3cmd/2.0.2/s3cmd-2.0.2.tar.gz/download
    mv download s3cmd.tar.gz
    tar xvfz s3cmd.tar.gz
    cd s3cmd-2.0.2/
    python setup.py install
    sudo python setup.py install
    cd ~
    s3cmd --configure

    I originally tried using screen but had some problems, mainly that sessions kept dropping from screen -r even though I launched them with the proper command, e.g. screen -S directory_1 -d -m s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_1. So I did some searching and found the nohup command. Here's what I ended up with:

    nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_1 > directory_1.out &
    nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_2 > directory_2.out &
    nohup s3cmd setacl --acl-private --recursive --verbose s3://my_bucket/directory_3 > directory_3.out &

    With a multi-cursor editor this becomes pretty easy (I used aws s3 ls s3://my_bucket to list the directories).

    With those running, you can log out whenever you want, log back in later, and tail any of the logs. You can tail multiple files at once, like: tail -f directory_1.out -f directory_2.out -f directory_3.out

    So set up s3cmd then use nohup as I demonstrated and you're good to go. Have fun!
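
    Not part of the original write-up: if typing one nohup line per directory gets tedious, the same idea can be sketched as a loop. This assumes s3cmd is configured as above, that my_bucket is a placeholder for your bucket, and that the top-level prefixes contain no spaces:

    # Assumed-equivalent sketch: launch one background s3cmd job per top-level
    # prefix of the bucket, writing each job's log to <prefix>.out as above.
    aws s3 ls s3://my_bucket/ | awk '$1 == "PRE" {print $2}' | while read prefix; do
        dir=${prefix%/}    # strip the trailing slash for the log file name
        nohup s3cmd setacl --acl-private --recursive --verbose \
            "s3://my_bucket/$prefix" > "$dir.out" &
    done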

  • 2021-01-30 20:31

    While @Varun Chandak's answer works great, it's worth mentioning that, due to the awk part, the script only accounts for the last part of the ls results. If the filename has spaces in it, awk will get only the last segment of the filename split by spaces, not the entire filename.

    Example: A file with a path like folder1/subfolder1/this is my file.txt would result in an entry called just file.txt.

    In order to prevent that while still using the script, you'd have to replace $NF in awk '{print $NF}' with a sequence of field placeholders that covers however many segments the split-by-space produces. Since filenames might contain quite a few spaces, I've gone with an exaggerated number of fields, but honestly I think a completely new approach would be better for these cases. Here's the updated code:

    #!/bin/sh
    # Print fields 4 through 25 of the ls output (date, time and size are $1-$3),
    # so keys containing up to 21 spaces come through; see the caveat above.
    aws s3 ls --recursive s3://my-bucket-name | awk '{print $4,$5,$6,$7,$8,$9,$10,$11,$12,$13,$14,$15,$16,$17,$18,$19,$20,$21,$22,$23,$24,$25}' | while read -r line; do
        echo "$line"
        aws s3api put-object-acl --acl private --bucket my-bucket-name --key "$line"
    done
    

    I should also mention that using cut didn't produce any results for me, so I removed it. Credits still go to @Varun Chandak, since he built the script.
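
    Not from the original answer, but here's a minimal sketch of a variant that keeps the whole key intact even when it contains spaces, by stripping the date, time and size columns with a regex substitution instead of printing individual awk fields (my-bucket-name is the same placeholder as above):

    # Strip the first three columns (date, time, size) and keep the rest of the
    # line verbatim, so keys containing spaces survive untouched.
    aws s3 ls --recursive s3://my-bucket-name \
      | awk '{ sub(/^[^ ]+ +[^ ]+ +[^ ]+ +/, ""); print }' \
      | while IFS= read -r key; do
            echo "$key"
            aws s3api put-object-acl --acl private --bucket my-bucket-name --key "$key"
        done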

  • 2021-01-30 20:36

    I actually used Amazon's UI, following this guide: http://aws.amazon.com/articles/5050/, although the console now looks somewhat different from that guide.

  • 2021-01-30 20:43

    If you want a delightfully simple one-liner, you can use the AWS Tools for PowerShell. We'll be using the Get-S3Object and Set-S3ACL cmdlets, both documented in the AWS Tools for PowerShell reference.

    $TargetS3Bucket = "myPrivateBucket"
    $TargetDirectory = "accidentallyPublicDir"
    $TargetRegion = "us-west-2"
    
    Set-DefaultAWSRegion $TargetRegion
    
    Get-S3Object -BucketName $TargetS3Bucket -KeyPrefix $TargetDirectory | Set-S3ACL -CannedACLName private
    
  • 2021-01-30 20:43

    There are two ways to manage this:

    1. Block the whole bucket (simpler, but does not apply to all use cases, e.g. an S3 bucket hosting a static website with a subfolder used as a CDN) - https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/
    2. Block access to just the directory in the S3 bucket that was granted the Make Public option, by running the script from ascobol (I just rewrote it with boto3):
    #!/usr/bin/env python
    # Remove public read access for all keys under a prefix.
    #
    # usage: remove_public.py bucketName folderName

    import sys
    import boto3

    BUCKET = sys.argv[1]
    PATH = sys.argv[2]

    s3client = boto3.client("s3")
    # Paginate so prefixes with more than 1000 keys are fully covered.
    paginator = s3client.get_paginator('list_objects_v2')
    page_iterator = paginator.paginate(Bucket=BUCKET, Prefix=PATH)
    for page in page_iterator:
        # A page has no 'Contents' entry when nothing matches the prefix.
        for k in page.get('Contents', []):
            s3client.put_object_acl(
                ACL='private',
                Bucket=BUCKET,
                Key=k['Key']
            )

    cheers

  • 2021-01-30 20:44

    It looks like this is now addressed by Amazon:

    Selecting the following checkbox makes the bucket and its contents private again:

    Block public and cross-account access if bucket has public policies

    https://aws.amazon.com/blogs/aws/amazon-s3-block-public-access-another-layer-of-protection-for-your-accounts-and-buckets/
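
    Not mentioned in the answer, but the same Block Public Access settings can also be applied from the AWS CLI. A minimal sketch, assuming the CLI is configured and my-bucket-name is a placeholder bucket name:

    # Turn on all four Block Public Access settings for the bucket.
    aws s3api put-public-access-block \
        --bucket my-bucket-name \
        --public-access-block-configuration \
        "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"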
