Celery Group task for use in a map/reduce workflow


Can I use a Celery Group primitive as the umbrella task in a map/reduce workflow?

Or, more specifically: can the subtasks in a Group be run on multiple workers on multiple servers?
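
For concreteness, the kind of fan-out I have in mind looks roughly like this (process_chunk and chunks are hypothetical placeholders, not real task names):

    from celery import group

    # Hypothetical fan-out: one subtask per chunk of the input.
    job = group(process_chunk.s(chunk) for chunk in chunks)
    result = job.apply_async()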

1 Answer
  • 2021-02-13 16:39

    I found out that it is possible to use chords for such a map/reduce-like problem.

    import time

    from celery import Celery, chord, current_task

    # Example broker/backend URLs (adjust to your setup); a chord
    # needs a result backend to collect the results of the maps.
    app = Celery('ic',
                 broker='redis://localhost:6379/0',
                 backend='redis://localhost:6379/0')

    @app.task(name='ic.mapper')
    def mapper():
        # Split your problem into embarrassingly parallel maps...
        maps = [map_task.s() for _ in range(8)]
        # ...and put them in a chord that executes them in parallel
        # and calls 'ic.reduce' with the results after they finish.
        chord(maps)(reduce_task.s())
        return "{0} mapper ran on {1}".format(
            current_task.request.id, current_task.request.hostname)

    @app.task(name='ic.map')
    def map_task():  # named map_task to avoid shadowing the built-in map()
        # Do something useful here.
        time.sleep(10.0)
        return "{0} map ran on {1}".format(
            current_task.request.id, current_task.request.hostname)

    @app.task(name='ic.reduce')
    def reduce_task(results):
        # Put the maps together and do something with the results.
        return "{0} reduce ran on {1}".format(
            current_task.request.id, current_task.request.hostname)
    

    When mapper is executed on a cluster of three workers/servers, it first splits the problem and creates new subtasks that are submitted back to the broker. These run in parallel because the queue is consumed by all workers. A chord callback task is also created that polls the maps to see whether they have all finished; when they are done, the reduce task is executed and you can glue your results back together.
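
    For completeness, a minimal sketch of kicking the workflow off from a client process, assuming the code above is importable as a module named ic and a worker is running (started with something like celery -A ic worker). Note that .get() only waits for the mapper itself, since the chord it spawned keeps running in the background:

    from ic import mapper  # assumes the code above lives in ic.py

    # Queue the umbrella task; the worker pool does the actual work.
    result = mapper.delay()
    # Waits for mapper's own return value, not for the chord it spawned.
    print(result.get(timeout=30))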

    In all: yes, it is possible. Thanks for the vegetable, guys!
