Amazon S3 console: download multiple files at once

傲寒 2021-01-31 07:00

When I log in to my S3 console, I am unable to download multiple selected files (the web UI allows downloads only when one file is selected).


15 Answers
  • 2021-01-31 07:22

    I did this by creating a shell script using the AWS CLI (e.g. example.sh):

    #!/bin/bash
    aws s3 cp s3://s3-bucket-path/example1.pdf LocalPath/Download/example1.pdf
    aws s3 cp s3://s3-bucket-path/example2.pdf LocalPath/Download/example2.pdf
    

    Make example.sh executable (e.g. chmod +x example.sh),

    then run your shell script: ./example.sh
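
    If you have many files, a loop is less tedious than one cp line per file. A minimal sketch, assuming a plain-text file files.txt (hypothetical name) that lists one object key per line, reusing the same placeholder bucket and local paths as above:

    #!/bin/bash
    # download every key listed in files.txt from the bucket
    bucket="s3-bucket-path"
    while read -r key; do
        aws s3 cp "s3://$bucket/$key" "LocalPath/Download/$key"
    done < files.txt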

  • 2021-01-31 07:23

    I wrote a simple shell script to download not just all files, but also all versions of every file, from a specific folder in an AWS S3 bucket. Here it is; you may find it useful:

    # Script generates the version info file for all the
    # content under a particular bucket prefix, then parses
    # out the key and versionId for each of the versions,
    # builds a fully qualified https url for every versioned
    # object, and uses curl to download it. (curl works only
    # if the bucket allows public reads; see the aws s3api
    # alternative after the script.)

    s3region="s3.ap-south-1.amazonaws.com"
    bucket="your_bucket_name"
    # note the location has no forward slash at beginning or at end
    location="data/that/you/want/to/download"

    # AWS CLI call to save the full version info locally, if you want
    aws s3api list-object-versions --bucket "$bucket" --prefix "$location/" \
        > version-info.json

    # emit "key <TAB> versionId" pairs, one per line
    aws s3api list-object-versions --bucket "$bucket" --prefix "$location/" \
        --query 'Versions[].[Key,VersionId]' --output text |
    while IFS=$'\t' read -r key version; do
        echo "############### $key ($version) ###################"
        url="https://$s3region/$bucket/$key?versionId=$version"
        echo "$url"
        curl -s "$url" -o "$(basename "$key")-$version"
    done
    
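    If the bucket does not allow public reads, the curl call will fail. A minimal alternative for the loop body, assuming the same $bucket, $key, and $version variables from the script above, is to let the AWS CLI fetch the versioned object with your credentials:

    # authenticated download of one specific object version
    aws s3api get-object --bucket "$bucket" --key "$key" \
        --version-id "$version" "$(basename "$key")-$version"
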
  • 2021-01-31 07:25

    You could also use CyberDuck. It works pretty well with S3 and you can download a folder.

  • 2021-01-31 07:27

    The S3 service has no meaningful limits on simultaneous downloads (easily several hundred downloads at a time are possible) and there is no policy setting related to this... but the S3 console only allows you to select one file for downloading at a time.

    Once the download starts, you can start another and another, as many as your browser will let you attempt simultaneously.
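
    If you want many simultaneous downloads from a script rather than from the browser, xargs can fan out AWS CLI calls. A minimal sketch, assuming a file keys.txt (hypothetical name) with one object key per line and a hypothetical bucket name:

    # run up to 8 aws s3 cp downloads in parallel
    xargs -P 8 -I {} aws s3 cp "s3://my-bucket/{}" . < keys.txt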

  • 2021-01-31 07:27

    If you have Visual Studio with the AWS Explorer extension installed, you can also browse to Amazon S3 (step 1), select your bucket (step 2), select all the files you want to download (step 3), and right-click to download them all (step 4).

  • 2021-01-31 07:28

    If you use the AWS CLI, you can use the --exclude flag along with the --include and --recursive flags to accomplish this:

    aws s3 cp s3://path/to/bucket/ . --recursive --exclude "*" --include "things_you_want"
    

    E.g.

    --exclude "*" --include "*.txt"
    

    will download all files with the .txt extension. More details: https://docs.aws.amazon.com/cli/latest/reference/s3/
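
    Filters are evaluated in the order given and later ones take precedence, so several --include flags can be chained after a single --exclude. A sketch (the bucket path is a placeholder):

    # download only .txt and .csv files
    aws s3 cp s3://path/to/bucket/ . --recursive \
        --exclude "*" --include "*.txt" --include "*.csv"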
