Multiprocessing in Python to speed up functions


Question


I am confused with Python multiprocessing.

I am trying to speed up a function that processes strings from a database, but I must have misunderstood how multiprocessing works, because the function takes longer when given to a pool of workers than with "normal processing".

Here is an example of what I am trying to achieve.

from time import time
from multiprocessing import Pool, freeze_support

from random import choice


def foo(x):
    TupWerteMany = []
    for i in range(0, len(x)):
        TupWerte = []
        s = list(x[i][3])
        NewValue = choice(s) + choice(s) + choice(s) + choice(s)
        TupWerte.append(NewValue)
        TupWerte = tuple(TupWerte)

        TupWerteMany.append(TupWerte)
    return TupWerteMany



if __name__ == '__main__':
    start_time = time()
    List = [(u'1', u'aa', u'Jacob', u'Emily'),
            (u'2', u'bb', u'Ethan', u'Kayla')]
    List1 = List * 1000000

    # METHOD 1: NORMAL (takes 20 seconds)
    x2 = foo(List1)
    print x2[1:3]

    # METHOD 2: APPLY_ASYNC (takes 28 seconds)
    # pool = Pool(4)
    # Werte = pool.apply_async(foo, args=(List1,))
    # x2 = Werte.get()
    # print '--------'
    # print x2[1:3]
    # print '--------'

    # METHOD 3: MAP (!! DOES NOT WORK !!)
    # pool = Pool(4)
    # Werte = pool.map(foo, args=(List1,))
    # x2 = Werte.get()
    # print '--------'
    # print x2[1:3]
    # print '--------'

    print 'Time Elapsed: ', time() - start_time

My questions:

  1. Why does apply_async take longer than the "normal way"?
  2. What am I doing wrong with map?
  3. Does it make sense to speed up such tasks with multiprocessing at all?
  4. Finally: after all I have read here, I am wondering whether multiprocessing in Python works on Windows at all?

Answer 1:


So your first problem is that there is no actual parallelism happening in foo(x): you are passing the entire list to the function once.

1) The idea of a process pool is to have many processes doing computations on separate bits of some data.

# METHOD 2: APPLY_ASYNC
jobs = 4
size = len(List1)
pool = Pool(4)
results = []
# split the list into 4 equally sized chunks and submit those to the pool
heads = range(size / jobs, size, size / jobs) + [size]
tails = range(0, size, size / jobs)
for tail, head in zip(tails, heads):
    werte = pool.apply_async(foo, args=(List1[tail:head],))
    results.append(werte)

pool.close()
pool.join()  # wait for the pool to be done

for result in results:
    werte = result.get()  # get the return value from the sub jobs

This will only give you an actual speedup if the time it takes to process each chunk is greater than the time it takes to launch the process. With four processes and four jobs that balance matters; of course the dynamics change if you have 4 processes and 100 jobs to be done. Remember that you are creating a completely new Python interpreter four times; this isn't free.
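As a rough illustration of that overhead, here is a minimal sketch; crunch is a made-up stand-in for your real per-chunk work (the sleep simulates it), and the timings are illustrative only:

def crunch(chunk):
    from time import sleep
    sleep(0.5)  # stand-in for real per-chunk work
    return len(chunk)

if __name__ == '__main__':
    from multiprocessing import Pool
    from time import time

    t0 = time()
    pool = Pool(4)  # spawning 4 interpreters is not free
    print 'pool startup: %.2fs' % (time() - t0)

    data = range(100)
    # hand each worker every 4th element as its chunk
    jobs = [pool.apply_async(crunch, args=(data[i::4],)) for i in range(4)]
    print [job.get() for job in jobs]
    pool.close()
    pool.join()
    print 'total: %.2fs' % (time() - t0)

If the per-chunk work is much shorter than the startup cost you just measured, the pool will lose to the serial version.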

2) The problem you have with map is that it applies foo to EVERY element of List1 in a separate process, which will take quite a while. So if your pool has 4 processes, map will pop an item off the list four times and send each to a process to be dealt with, wait for a process to finish, pop some more items off the list, wait again, and so on. This makes sense only if processing a single item takes a long time, for instance if every item is a file name pointing to a one-gigabyte text file. As it stands, though, map will take a single string off the list and pass it to foo, whereas apply_async takes a slice of the list. Try the following code:

def foo(thing):
    print thing

map(foo, ['a','b','c','d'])

That's the built-in Python map and it runs in a single process, but the idea is exactly the same for the multiprocessing version.

Added as per J.F.Sebastian's comment: you can, however, use the chunksize argument to map to specify an approximate size for each chunk.

pool.map(foo, List1, chunksize=size/jobs) 

I don't know though if there is a problem with map on Windows as I don't have one available for testing.

3) Yes, given that your problem is big enough to justify the cost of spawning new Python interpreters.

4) I can't give you a definitive answer on that as it depends on the number of cores/processors etc., but in general it should be fine on Windows.
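One Windows-specific point worth knowing: since Windows has no fork(), each worker process re-imports your module, so the `if __name__ == '__main__':` guard is mandatory there (the question already has it), and `freeze_support()` (which the question already imports) is needed when the script is frozen into a Windows executable. A minimal sketch:

from multiprocessing import Pool, freeze_support

def work(x):
    return x * x

if __name__ == '__main__':
    freeze_support()  # no-op unless the script is frozen into a Windows exe
    pool = Pool(4)
    print pool.map(work, range(10))
    pool.close()
    pool.join()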




Answer 2:


On question (2): with the guidance of Dougal and Matti, I figured out what went wrong. The original foo function processes a whole list, while map requires a function that processes single elements.

The new function should be

def foo2(x):
    TupWerte = []
    s = list(x[3])
    NewValue = choice(s)+choice(s)+choice(s)+choice(s)
    TupWerte.append(NewValue)
    TupWerte = tuple(TupWerte)
    return TupWerte

and the block to call it :

jobs = 4
size = len(List1)
pool = Pool()
#Werte = pool.map(foo2, List1, chunksize=size/jobs)
Werte = pool.map(foo2, List1)
pool.close()
print Werte[1:3]

Thanks to all of you who helped me understand this.

Results of all methods, for List * 2 million records: normal: 13.3 seconds; parallel with apply_async: 7.5 seconds; parallel with map with chunksize: 7.3 seconds; with map without chunksize: 5.2 seconds.




Answer 3:


Here's a generic multiprocessing template if you are interested.

import multiprocessing as mp
import time

def worker(x):
    time.sleep(0.2)
    print "x= %s, x squared = %s" % (x, x*x)
    return x*x

def apply_async():
    pool = mp.Pool()
    for i in range(100):
        pool.apply_async(worker, args = (i, ))
    pool.close()
    pool.join()

if __name__ == '__main__':
    apply_async()

And the output looks like this:

x= 0, x squared = 0
x= 1, x squared = 1
x= 2, x squared = 4
x= 3, x squared = 9
x= 4, x squared = 16
x= 6, x squared = 36
x= 5, x squared = 25
x= 7, x squared = 49
x= 8, x squared = 64
x= 10, x squared = 100
x= 11, x squared = 121
x= 9, x squared = 81
x= 12, x squared = 144

As you can see, the numbers are not in order, as they are being executed asynchronously.
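If you need the results back in submission order, one option (a small sketch built on the same template; the names handles/worker are just illustrative) is to keep the AsyncResult objects in a list and call get() on them in submission order, since get() blocks until that particular job is done. pool.map gives you this ordering for free:

import multiprocessing as mp
import time

def worker(x):
    time.sleep(0.2)
    return x * x

if __name__ == '__main__':
    pool = mp.Pool()
    # keep the handles in submission order and get() them in that order
    handles = [pool.apply_async(worker, args=(i,)) for i in range(13)]
    print [h.get() for h in handles]  # results print in order
    pool.close()
    pool.join()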



Source: https://stackoverflow.com/questions/12116004/multiprocessing-in-python-to-speed-up-functions
