Python: using multiprocessing is slower than not using it


After spending a lot of time trying to wrap my head around multiprocessing, I came up with this code as a benchmark test:

Example 1:

    [benchmark code not preserved in this copy of the question]
4 Answers
  • 2020-12-29 13:24

    This thread has been very useful!

    Just a quick observation about the second, improved code provided by David Robinson above (answered Jan 8 '12 at 5:34), which was the code most suitable to my current needs.

    In my case, I had previous records of the running times of a target function without multiprocessing. When I used his code to implement a multiprocessing function, his timefunc(multi) didn't reflect the actual time of multi; rather, it appeared to reflect the time spent in the parent process.

    What I did was externalize the timing, and the time that I got looked more like what I expected:

    start = time.time()
    multi()   # or single()
    # NUM_WORKERS is a placeholder for however many worker processes were used
    elapsed = (time.time() - start) / NUM_WORKERS
    print(elapsed)
    

    In my case, on a dual-core machine, the total time taken by 'x' workers running the target function was about half that of running a simple for-loop over the target function with 'x' iterations.

    I am new to multiprocessing, though, so please treat this observation with caution.

  • 2020-12-29 13:26

    ETA: Now that you've posted your code, I can tell you there is a simple way to do what you're doing MUCH faster (>100 times faster).

    I see that what you're doing is adding a frequency in parentheses to each item in a list of strings. Instead of counting all the elements each time (which, as you can confirm using cProfile, is by far the largest bottleneck in your code), you can just create a dictionary that maps each element to its frequency. That way, you only have to go through the list twice: once to create the frequency dictionary, and once to use it to add the frequencies.

    Here I'll show my new method, time it, and compare it to the old method using a generated test case. The test case even shows the new result to be exactly identical to the old one. Note: All you really need to pay attention to below is the new_method.

    import random
    import time
    import collections
    
    LIST_LEN = 14000
    
    def timefunc(f):
        t = time.time()
        f()
        return time.time() - t
    
    
    def random_string(length=3):
        """Return a random string of given length"""
        return "".join([chr(random.randint(65, 90)) for i in range(length)])
    
    
    class Profiler:
        def __init__(self):
            self.original = [[random_string() for i in range(LIST_LEN)]
                                for j in range(4)]
    
        def old_method(self):
            self.ListVar = self.original[:]
            for b in range(len(self.ListVar)):
                self.list1 = []
                self.temp = []
                for n in range(len(self.ListVar[b])):
                    if not self.ListVar[b][n] in self.temp:
                    self.list1.insert(n, self.ListVar[b][n] + '(' + str(self.ListVar[b].count(self.ListVar[b][n])) + ')')
                        self.temp.insert(0, self.ListVar[b][n])
    
                self.ListVar[b] = list(self.list1)
            return self.ListVar
    
        def new_method(self):
            self.ListVar = self.original[:]
            for i, inner_lst in enumerate(self.ListVar):
                freq_dict = collections.defaultdict(int)
                # create frequency dictionary
                for e in inner_lst:
                    freq_dict[e] += 1
                temp = set()
                ret = []
                for e in inner_lst:
                    if e not in temp:
                        ret.append(e + '(' + str(freq_dict[e]) + ')')
                        temp.add(e)
                self.ListVar[i] = ret
            return self.ListVar
    
        def time_and_confirm(self):
            """
            Time the old and new methods, and confirm they return the same value
            """
            time_a = time.time()
            l1 = self.old_method()
            time_b = time.time()
            l2 = self.new_method()
            time_c = time.time()
    
            # confirm that the two are the same
            assert l1 == l2, "The old and new methods don't return the same value"
    
            return time_b - time_a, time_c - time_b
    
    p = Profiler()
    print(p.time_and_confirm())
    

    When I run this, it gets times of (15.963812112808228, 0.05961179733276367), meaning it's about 250 times faster, though this advantage depends on both how long the lists are and the frequency distribution within each list. I'm sure you'll agree that with this speed advantage, you probably won't need to use multiprocessing :)

    (My original answer is left in below for posterity)

    ETA: By the way, it is worth noting that this algorithm is roughly linear in the length of the lists, while the code you used is quadratic. This means it performs with even more of an advantage the larger the number of elements. For example, if you increase the length of each list to 1000000, it takes only 5 seconds to run. Based on extrapolation, the old code would take over a day :)
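    As a quick, hypothetical way to see that linear behaviour for yourself (this sketch is mine, not part of the original answer; freq_annotate is just an illustrative name for a Counter-based version of new_method): time it at two list lengths and compare. Doubling the length should roughly double the runtime, whereas the count()-based old_method would roughly quadruple.

    import collections
    import random
    import time

    def freq_annotate(lst):
        # one linear pass to count, one linear pass to annotate
        freq = collections.Counter(lst)
        seen = set()
        out = []
        for e in lst:
            if e not in seen:
                out.append('%s(%d)' % (e, freq[e]))
                seen.add(e)
        return out

    for n in (20000, 40000):
        data = ['%03d' % random.randint(0, 999) for _ in range(n)]
        t = time.time()
        freq_annotate(data)
        print(n, time.time() - t)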


    It depends on the operation you are performing. For example:

    import time
    from multiprocessing import Process

    NUM_RANGE = 100000000

    def timefunc(f):
        t = time.time()
        f()
        return time.time() - t

    class MultiProcess(Process):
        def run(self):
            # CPU-bound busy loop to test processing speed
            for i in range(NUM_RANGE):
                a = 20 * 20

    def multi():
        # run two busy loops in parallel, one per process
        process1 = MultiProcess()
        process2 = MultiProcess()
        process1.start()
        process2.start()
        process1.join()
        process2.join()

    def single():
        # run the same two busy loops sequentially in one process
        for i in range(NUM_RANGE):
            a = 20 * 20

        for i in range(NUM_RANGE):
            a = 20 * 20

    if __name__ == '__main__':
        print(timefunc(multi) / timefunc(single))
    

    On my machine, the multiprocess version takes only ~60% of the time of the single-process one.

  • 2020-12-29 13:29

    Multiprocessing could be useful for what you're doing, but not in the way you're thinking about using it. Since you're basically doing some computation on every member of a list, you can use the multiprocessing.Pool.map method to perform that computation on the list members in parallel.

    Here is an example that shows your code's performance using a single process and using multiprocessing.Pool.map:

    from multiprocessing import Pool
    from random import choice
    from string import printable
    from time import time

    def build_test_list():
        # Builds a test list consisting of 5 sublists of 10000 strings each.
        # Each string is 20 characters long.
        testlist = [[], [], [], [], []]
        for sublist in testlist:
            for _ in range(10000):
                sublist.append(''.join(choice(printable) for _ in range(20)))
        return testlist

    def process_list(l):
        # the time-consuming code: annotate each unique string with its count
        result = []
        tmp = []
        for n in range(len(l)):
            if l[n] not in tmp:
                result.insert(n, l[n] + ' (' + str(l.count(l[n])) + ')')
                tmp.insert(0, l[n])
        return result

    def single(l):
        # process the test list elements using a single process
        results = []
        for sublist in l:
            results.append(process_list(sublist))
        return results

    def multi(l):
        # process the test list elements in parallel, one sublist per worker
        pool = Pool()
        results = pool.map(process_list, l)
        return results

    if __name__ == '__main__':
        print("Building the test list...")
        testlist = build_test_list()

        print("Processing the test list using a single process...")
        starttime = time()
        singleresults = single(testlist)
        singletime = time() - starttime

        print("Processing the test list using multiple processes...")
        starttime = time()
        multiresults = multi(testlist)
        multitime = time() - starttime

        # make sure they both return the same thing
        assert singleresults == multiresults

        print("Single process: {0:.2f}sec".format(singletime))
        print("Multiple processes: {0:.2f}sec".format(multitime))
    

    Output:

    Building the test list...
    Processing the test list using a single process...
    Processing the test list using multiple processes...
    Single process: 34.73sec
    Multiple processes: 24.97sec
    
  • 2020-12-29 13:43

    This example is too small to benefit from multiprocessing.

    There's a LOT of overhead in starting a new process. If heavy processing were involved, that overhead would be negligible. But your example really isn't all that intensive, so you're bound to notice the overhead.
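    To make that startup overhead concrete, here's a minimal sketch (mine, not the answerer's): spawn a process whose target does nothing and time the start/join round trip. Whatever it prints is pure per-process cost that any multiprocessing speedup has to amortize.

    import time
    from multiprocessing import Process

    def noop():
        # a worker that does no real work
        pass

    if __name__ == '__main__':
        start = time.time()
        p = Process(target=noop)
        p.start()
        p.join()
        print('start/join overhead: %.4f sec' % (time.time() - start))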

    You'd probably notice a bigger difference with real threads; too bad Python (well, CPython) has issues with CPU-bound threading because of the global interpreter lock (GIL).
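    A quick hypothetical sketch of that limitation: the same pure-Python CPU-bound loop run in two threads takes about as long as running it twice serially, because the GIL lets only one thread execute Python bytecode at a time.

    import time
    from threading import Thread

    N = 10000000

    def busy():
        # pure-Python CPU-bound loop; holds the GIL while running
        total = 0
        for i in range(N):
            total += i * i

    t = time.time()
    busy()
    busy()
    serial = time.time() - t

    t1 = Thread(target=busy)
    t2 = Thread(target=busy)
    t = time.time()
    t1.start(); t2.start()
    t1.join(); t2.join()
    threaded = time.time() - t

    print('serial:   %.2f sec' % serial)
    print('threaded: %.2f sec' % threaded)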
