Copy multiple files from s3 bucket

庸人自扰 2021-02-01 15:00

I am having trouble downloading multiple files from AWS S3 buckets to my local machine.

I have the filenames of all the files I want to download, and I do not want the others. How can I do that?

6 Answers
  • 2021-02-01 15:27

    As per the docs, you can use include and exclude filters with s3 cp as well. So you can do something like this:

    aws s3 cp s3://bucket/folder/ . --recursive --exclude="*" --include="2017-12-20*"
    

    Make sure you get the order of the exclude and include filters right, as that can change the whole meaning: the filters are applied in the order given, and the last filter that matches a file wins.
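
    For example, here is a quick sketch of the difference (the bucket and date prefix are just placeholders): with the filters reversed, the trailing --exclude="*" is evaluated last and wins, so nothing gets copied.

    # Nothing is copied: --exclude="*" comes last and overrides the earlier include
    aws s3 cp s3://bucket/folder/ . --recursive --include="2017-12-20*" --exclude="*"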

  • 2021-02-01 15:29

    I got the problem solved; it may be a little bit stupid, but it works.

    Using Python, I write multiple lines of AWS download commands into a single .sh file, then I execute it in the terminal.
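
    A minimal shell sketch of the same idea (a Python version would just print the same commands); filenames.txt and the bucket path below are placeholders, assuming one S3 key per line:

    # Generate one aws s3 cp command per key, then run the resulting script
    while IFS= read -r key; do
      echo "aws s3 cp \"s3://my-bucket/$key\" ./downloads/"
    done < filenames.txt > download.sh

    bash download.sh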

  • 2021-02-01 15:35

    You can also use the --recursive option, as described in the documentation for the cp command. It copies all objects under a specified prefix recursively.

    Example:

    aws s3 cp s3://folder1/folder2/folder3 . --recursive
    

    will grab all files under folder1/folder2/folder3 and copy them to the local directory.

  • 2021-02-01 15:37

    You might want to use "sync" instead of "cp". The following will download/sync only the files with the ".txt" extension into your local folder:

    aws s3 sync --exclude="*" --include="*.txt" s3://mybucket/mysubbucket .
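
    If you already have the exact filenames, one variation (the bucket and file names below are made up) is to repeat --include once per file, since the filter option can be given multiple times:

    aws s3 sync s3://mybucket/mysubbucket . --exclude="*" \
        --include="report-a.txt" --include="report-b.txt"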
    
  • 2021-02-01 15:39

    Tried all the above. Not much joy. Finally, adapted @rajan's reply into a one-liner:

    for file in whatever*.txt; do aws s3 cp "$file" s3://somewhere/in/my/bucket/; done
    
  • 2021-02-01 15:42

    Here is a bash script that reads all the filenames from a file filename.txt and downloads each object:

    #!/bin/bash
    set -e
    # Read one S3 key per line from filename.txt and download each object
    while IFS= read -r line
    do
      aws s3 cp "s3://bucket-name/$line" dest-path/
    done < filename.txt
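
    A variation on the same script, if the list is long: drive aws s3 cp with xargs so several downloads run in parallel (the parallelism level and paths here are arbitrary):

    # Download up to 8 objects at a time; {} is replaced with each key from filename.txt
    xargs -P 8 -I {} aws s3 cp "s3://bucket-name/{}" dest-path/ < filename.txt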
    