Multiple Inputs with MRJob

失恋的感觉 asked 2020-12-29 14:58

I'm trying to learn to use Yelp's Python API for MapReduce, MRJob. Their simple word-counter example makes sense, but I'm curious how one would handle an application involving multiple inputs.
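
For reference, a minimal job modelled on the word-count example in the MRJob documentation looks roughly like this (class and variable names here are illustrative, not from the question):

    from mrjob.job import MRJob

    class MRWordCount(MRJob):

        def mapper(self, _, line):
            # Each call receives one line of input; emit (word, 1) pairs.
            for word in line.split():
                yield word, 1

        def reducer(self, word, counts):
            # Sum the counts emitted for each word.
            yield word, sum(counts)

    if __name__ == '__main__':
        MRWordCount.run()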

5 Answers
  •  隐瞒了意图╮
    2020-12-29 15:37

    If you need to process your raw data against another data set (or against itself, e.g. comparing row_i with row_j), you can either:

    1) Create an S3 bucket to store a copy of your data. Pass the location of this copy to your task class, e.g. self.options.bucket and self.options.my_datafile_copy_location in the code below (a sketch of how to declare those options on the job class follows the code). Caveat: unfortunately, it seems that the whole file must get "downloaded" to the task machines before it can be processed. If the connection falters or takes too long to load, this job may fail. Here is some Python/MRJob code to do this.

    Put this in your mapper function:

    import boto  # normally placed at the top of your job file

    # 'line1' is the raw input line passed to the mapper.
    d1 = line1.split('\t', 1)
    v1, col1 = d1[0], d1[1]
    # Replace the placeholders with your own AWS credentials.
    conn = boto.connect_s3(aws_access_key_id=MY_AWS_ACCESS_KEY_ID,
                           aws_secret_access_key=MY_AWS_SECRET_ACCESS_KEY)
    bucket = conn.get_bucket(self.options.bucket)  # bucket = conn.get_bucket(MY_UNIQUE_BUCKET_NAME_AS_STRING)
    data_copy = bucket.get_key(self.options.my_datafile_copy_location).get_contents_as_string().rstrip()
    ### CAVEAT: the whole file is fetched before any of it is processed.
    for line2 in data_copy.split('\n'):
        d2 = line2.split('\t', 1)
        v2, col2 = d2[0], d2[1]
        ## Now, insert code to do any operations between v1 and v2 (or col1 and col2) here,
        ## and yield whatever key/value pair your job needs, e.g.:
        yield v1, (col1, col2)
    conn.close()
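
    For the self.options.* values above to exist, the job class has to declare them as passthrough options. A minimal sketch, assuming a recent MRJob release (0.6+, where the hook is configure_args; older releases call it configure_options / add_passthrough_option) and a hypothetical class name MRJoinJob:

    from mrjob.job import MRJob

    class MRJoinJob(MRJob):

        def configure_args(self):
            super(MRJoinJob, self).configure_args()
            # These names match self.options.bucket and
            # self.options.my_datafile_copy_location used in the mapper above.
            self.add_passthru_arg('--bucket')
            self.add_passthru_arg('--my-datafile-copy-location',
                                  dest='my_datafile_copy_location')

        def mapper(self, _, line1):
            pass  # the S3-reading mapper body shown above goes here

    if __name__ == '__main__':
        MRJoinJob.run()

    You would then run it with something like: python my_job.py --bucket MY_UNIQUE_BUCKET_NAME_AS_STRING --my-datafile-copy-location path/to/copy input.txt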
    

    2) Create a SimpleDB domain and store all of your data there (a rough sketch of the loading step follows). Read about boto and SimpleDB here: http://code.google.com/p/boto/wiki/SimpleDbIntro
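
    As a sketch of that loading step, assuming boto 2's SimpleDB API, a tab-separated local file, and a single attribute named 'column' to match the mapper below:

    import boto

    # Replace the placeholders with your own credentials and domain name.
    sdb = boto.connect_sdb(aws_access_key_id=MY_AWS_ACCESS_KEY_ID,
                           aws_secret_access_key=MY_AWS_SECRET_ACCESS_KEY)
    domain = sdb.create_domain(MY_DOMAIN_STRING_NAME)  # no-op if the domain already exists
    with open('my_datafile.tsv') as f:                 # hypothetical local copy of the second data set
        for line in f:
            v, col = line.rstrip('\n').split('\t', 1)
            # Item name = the row key; one attribute named 'column' holds the value.
            domain.put_attributes(v, {'column': col})
    sdb.close()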

    Your mapper code would look like this:

    import boto  # normally placed at the top of your job file

    # 'dline' is the raw input line passed to the mapper.
    dline = dline.strip()
    d0 = dline.split('\t', 1)
    v1, c1 = d0[0], d0[1]
    # Replace the placeholders with your own AWS credentials and domain name.
    sdb = boto.connect_sdb(aws_access_key_id=MY_AWS_ACCESS_KEY_ID,
                           aws_secret_access_key=MY_AWS_SECRET_ACCESS_KEY)
    domain = sdb.get_domain(MY_DOMAIN_STRING_NAME)
    # Iterating the domain runs a SELECT over every item it contains.
    for item in domain:
        v2, c2 = item.name, item['column']
        ## Now, insert code to do any operations between v1 and v2 (or c1 and c2) here,
        ## and yield whatever key/value pair your job needs, e.g.:
        yield v1, (c1, c2)
    sdb.close()
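
    If each input line only needs to be matched against one specific item rather than every row, a single lookup per line avoids scanning the whole domain. A minimal sketch, assuming the SimpleDB item name is the join key v1 (boto 2's Domain.get_item):

    item = domain.get_item(v1)      # returns None if no item with that name exists
    if item is not None:
        v2, c2 = item.name, item['column']
        # combine v1/c1 with v2/c2 and yield whatever your job needs, e.g.:
        yield v1, (c1, c2)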
    

    This second option may perform better if you have very large amounts of data, since it can request each row of data as needed rather than pulling the whole data set at once. Keep in mind that SimpleDB values can only be a maximum of 1024 characters long, so you may need to compress/decompress the values via some method (one possibility is sketched below) if your data values are longer than that.
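
    One way to work around that limit is to zlib-compress and base64-encode long values before storing them, and reverse that in the mapper. A minimal sketch (the helper names are mine):

    import base64
    import zlib

    def compress_value(text):
        # Compress, then base64-encode so the result stays plain ASCII for SimpleDB.
        return base64.b64encode(zlib.compress(text.encode('utf-8'))).decode('ascii')

    def decompress_value(stored):
        # Reverse of compress_value: base64-decode, then decompress back to text.
        return zlib.decompress(base64.b64decode(stored)).decode('utf-8')

    Note that compression plus base64 only helps if the values actually compress well; base64 alone adds roughly 33% overhead.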
