Question
Using the Boto3 Python SDK, I was able to download files using the method bucket.download_file(). Is there a way to download an entire folder?
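For context, the single-file call mentioned above looks roughly like this (the bucket name and object key are placeholders):

import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('my-bucket')                      # hypothetical bucket name
bucket.download_file('folder/file.txt', 'file.txt')  # downloads one object to a local file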
Answer 1:
Quick and dirty, but it works:
import boto3
import os

def downloadDirectoryFroms3(bucketName, remoteDirectoryName):
    s3_resource = boto3.resource('s3')
    bucket = s3_resource.Bucket(bucketName)
    for obj in bucket.objects.filter(Prefix=remoteDirectoryName):
        if obj.key.endswith('/'):                       # skip "directory" placeholder objects
            continue
        directory = os.path.dirname(obj.key)
        if directory and not os.path.exists(directory):
            os.makedirs(directory)                      # recreate the key's directory structure locally
        bucket.download_file(obj.key, obj.key)          # save to the same relative path
Assuming you want to download the directory foo/bar from S3, the for-loop will iterate over all objects whose key starts with the Prefix foo/bar.
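For example, a call like the following (the bucket name is a placeholder) mirrors everything under foo/bar into a local foo/bar directory:

downloadDirectoryFroms3('my-bucket', 'foo/bar')  # 'my-bucket' is hypothetical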
Answer 2:
A slightly less dirty modification of the accepted answer by Konstantinos Katsantonis:
import os
import boto3

s3 = boto3.resource('s3')  # assumes credentials & configuration are handled outside Python, in the .aws directory or environment variables

def download_s3_folder(bucket_name, s3_folder, local_dir=None):
    """
    Download the contents of a folder directory.
    Args:
        bucket_name: the name of the s3 bucket
        s3_folder: the folder path in the s3 bucket
        local_dir: a relative or absolute directory path in the local file system
    """
    bucket = s3.Bucket(bucket_name)
    for obj in bucket.objects.filter(Prefix=s3_folder):
        target = obj.key if local_dir is None \
            else os.path.join(local_dir, os.path.relpath(obj.key, s3_folder))
        if not os.path.exists(os.path.dirname(target)):
            os.makedirs(os.path.dirname(target))
        if obj.key[-1] == '/':  # skip "directory" placeholder objects
            continue
        bucket.download_file(obj.key, target)
This downloads nested subdirectories, too. I was able to download a directory with over 3,000 files in it. You'll find other solutions under "Boto3 to download all files from a S3 Bucket", but I don't know whether they're any better.
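For reference, a typical call might look like this (the bucket, folder, and local path are all placeholders):

download_s3_folder('my-bucket', 'foo/bar', local_dir='data/bar')  # hypothetical names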
Answer 3:
Using boto3, you can set AWS credentials explicitly and download a dataset from S3:
import boto3
import os

# set AWS credentials explicitly (the key values below are placeholders)
s3r = boto3.resource('s3', aws_access_key_id='xxxxxxxxxxxxxxxxx',
                     aws_secret_access_key='xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx')
bucket = s3r.Bucket('bucket_name')

# downloading folder
prefix = 'dirname'
for obj in bucket.objects.filter(Prefix=prefix):
    if obj.key.endswith('/'):                    # skip "directory" placeholder objects
        continue
    directory = os.path.dirname(obj.key)
    if directory:
        os.makedirs(directory, exist_ok=True)    # create local directories as needed
    bucket.download_file(obj.key, obj.key)
If you cannot find your access_key and secret_access_key, refer to this page.
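As an alternative to hard-coding keys, a minimal sketch like the following would pull credentials from a named profile configured via the AWS CLI (the profile name here is an assumption):

import boto3

session = boto3.Session(profile_name='default')  # 'default' is an assumed profile name
s3r = session.resource('s3')                     # credentials are read from ~/.aws/credentials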
I hope it helps. Thank you.
Source: https://stackoverflow.com/questions/49772151/download-a-folder-from-s3-using-boto3