How to get more than 1000 objects from S3 by using list_objects_v2?

Submitted by 泪湿孤枕 on 2020-08-22 03:02:35

Question


I have more than 500,000 objects on S3. I am trying to get the size of each object. I am using the following Python code for that:

import boto3

bucket = 'bucket'
prefix = 'prefix'

contents = boto3.client('s3').list_objects_v2(Bucket=bucket, MaxKeys=1000, Prefix=prefix)["Contents"]

for c in contents:
    print(c["Size"])

But it only gives me the sizes of the first 1000 objects. Based on the documentation, list_objects_v2 returns at most 1000 keys per call. Is there any way I can get more than that?


Answer 1:


Use the NextContinuationToken returned in each response as the ContinuationToken parameter for the next call, and repeat until the IsTruncated value in the response is false.

This can be factored into a neat generator function:

def get_all_s3_objects(s3, **base_kwargs):
    continuation_token = None
    while True:
        list_kwargs = dict(MaxKeys=1000, **base_kwargs)
        if continuation_token:
            # Resume the listing where the previous page left off
            list_kwargs['ContinuationToken'] = continuation_token
        response = s3.list_objects_v2(**list_kwargs)
        yield from response.get('Contents', [])
        if not response.get('IsTruncated'):  # At the end of the list?
            break
        continuation_token = response.get('NextContinuationToken')

for file in get_all_s3_objects(boto3.client('s3'), Bucket=bucket, Prefix=prefix):
    print(file['Size'])
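
The generator also makes aggregation straightforward; as a minimal usage sketch (the total_bytes name is just illustrative), this sums the sizes of every object under the prefix:

# Sum the sizes of all objects under the prefix (illustrative sketch)
total_bytes = sum(
    obj['Size']
    for obj in get_all_s3_objects(boto3.client('s3'), Bucket=bucket, Prefix=prefix)
)
print(total_bytes)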



Answer 2:


The built-in boto3 Paginator class is the easiest way to overcome the 1000-record limit of list_objects_v2. It can be used as follows:

import boto3

s3 = boto3.client('s3')

paginator = s3.get_paginator('list_objects_v2')
pages = paginator.paginate(Bucket='bucket', Prefix='prefix')

for page in pages:
    # 'Contents' is absent from a page when the listing is empty
    for obj in page.get('Contents', []):
        print(obj['Size'])

For more details: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#S3.Paginator.ListObjectsV2
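
Paginators can also flatten pages for you with a JMESPath expression via their search() method; a minimal sketch, assuming the same bucket and prefix as above:

for size in paginator.paginate(Bucket='bucket', Prefix='prefix').search('Contents[].Size'):
    # search() can yield None for pages that have no 'Contents'
    if size is not None:
        print(size)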




Answer 3:


If you don't NEED to use boto3.client, you can use boto3.resource to get a complete list of your files:

s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')
# objects.all() handles the pagination for you and returns every object summary
files_in_bucket = list(bucket.objects.all())

Then, to get the sizes:

sizes = [f.size for f in files_in_bucket]

Depending on the number of objects in your bucket, this might take a while.
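
Since the question only needs objects under a prefix, the resource API can restrict the listing in the same way; a minimal sketch, assuming the bucket and prefix names from the question:

import boto3

s3r = boto3.resource('s3')
bucket = s3r.Bucket('bucket_name')
# filter() paginates transparently, just like objects.all()
for obj in bucket.objects.filter(Prefix='prefix'):
    print(obj.key, obj.size)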



Source: https://stackoverflow.com/questions/54314563/how-to-get-more-than-1000-objects-from-s3-by-using-list-objects-v2
