Fastest way to download 3 million objects from an S3 bucket

谎友^ 2021-01-30 13:41

I've tried using Python + boto + multiprocessing, S3cmd and J3tset, but I'm struggling with all of them.

Any suggestions, perhaps a ready-made script you've been using, or another approach I should know about?

2 Answers
  • 2021-01-30 14:26

    Okay, I figured out a solution based on @Matt Billenstien's hint. It uses the eventlet library. The first step is the most important one here (monkey patching the standard I/O libraries).

    Run this script in the background with nohup and you're all set.

    from eventlet import *
    patcher.monkey_patch(all=True)
    
    from boto.s3.connection import S3Connection
    from boto.s3.bucket import Bucket
    
    import logging
    
    logging.basicConfig(filename="s3_download.log", level=logging.INFO)
    
    
    def download_file(key_name):
        # It's important to fetch the key over a fresh connection in each greenlet
        conn = S3Connection("KEY", "SECRET")
        bucket = Bucket(connection=conn, name="BUCKET")
        key = bucket.get_key(key_name)
    
        try:
            key.get_contents_to_filename(key_name)
        except Exception:
            logging.info(key_name + ": FAILED")
    
    
    if __name__ == "__main__":
        conn = S3Connection("KEY", "SECRET")
        bucket = Bucket(connection=conn, name="BUCKET")
    
        logging.info("Fetching bucket list")
        bucket_list = bucket.list(prefix="PREFIX")
    
        logging.info("Creating a pool")
        pool = GreenPool(size=20)
    
        logging.info("Saving files in bucket...")
        for key in bucket_list:
            pool.spawn_n(download_file, key.key)
        pool.waitall()
    
  • 2021-01-30 14:48

    Use eventlet to give you I/O parallelism, write a simple function to download one object using urllib, then use a GreenPile to map that over your list of input URLs -- a pile with 50 to 100 greenlets should do...
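
    Roughly, that looks like the following. This is only a minimal sketch, not the answerer's exact code: it assumes the object URLs are already listed in a manifest file (the name url_list.txt is hypothetical) and that they are publicly readable or pre-signed, since plain urllib does no S3 request signing.

    import eventlet
    eventlet.monkey_patch()  # patch stdlib sockets first so greenlets yield on network I/O

    from urllib.request import urlretrieve  # Python 3; on Python 2 use urllib.urlretrieve


    def fetch(url):
        # Save each object under the last path component of its URL
        filename = url.rsplit("/", 1)[-1]
        urlretrieve(url, filename)
        return filename


    # Hypothetical manifest with one object URL per line
    with open("url_list.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    pile = eventlet.GreenPile(100)  # 50-100 concurrent greenlets, as suggested
    for url in urls:
        pile.spawn(fetch, url)

    # Iterating the pile yields each fetch's return value as it completes
    for filename in pile:
        print(filename)

    As in the other answer, calling monkey_patch() before the other imports is the key step, so urllib's socket use goes through eventlet's green sockets and the downloads actually overlap.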
