Python: Executing multiple functions simultaneously

Asked by 借酒劲吻你 on 2020-11-27 04:05

I'm trying to run two functions simultaneously in Python. I have tried the below code, which uses multiprocessing, but when I execute it the second function does not appear to start until the first one has finished.

5 Answers
  • 2020-11-27 04:28

    This is just what I needed. I know it wasn't asked, but I modified Shashank's code to suit Python 3 for anyone else looking :)

    from multiprocessing import Process
    import sys
    
    rocket = 0
    
    def func1():
        global rocket
        print ('start func1')
        while rocket < sys.maxsize:
            rocket += 1
        print ('end func1')
    
    def func2():
        global rocket
        print ('start func2')
        while rocket < sys.maxsize:
            rocket += 1
        print ('end func2')
    
    if __name__=='__main__':
        p1 = Process(target=func1)
        p1.start()
        p2 = Process(target=func2)
        p2.start()
    

    Substitute a smaller number for sys.maxsize and add print(rocket) inside the loop, and you can watch it count up one at a time until it reaches that number and stops.
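    As a concrete variation (my own sketch, not from the answer), counting to a small limit makes the behavior easy to watch. Note that each process gets its own copy of the counter: multiprocessing does not share ordinary globals between processes.

```python
from multiprocessing import Process

LIMIT = 5  # small stand-in for sys.maxsize


def count_to(limit):
    # Count one step at a time, as in the answer's loop.
    n = 0
    while n < limit:
        n += 1
    return n


def worker(name):
    # Each process counts its own copy -- globals are not shared.
    print(name, 'counted to', count_to(LIMIT))


if __name__ == '__main__':
    p1 = Process(target=worker, args=('func1',))
    p2 = Process(target=worker, args=('func2',))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
```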

  • 2020-11-27 04:28

    This is a very good example by @Shashank. I just want to say that I had to add join() at the end, or else the parent would not wait for the two processes to finish:

    from multiprocessing import Process
    import sys
    
    rocket = 0
    
    def func1():
        global rocket
        print 'start func1'
        while rocket < sys.maxint:
            rocket += 1
        print 'end func1'
    
    def func2():
        global rocket
        print 'start func2'
        while rocket < sys.maxint:
            rocket += 1
        print 'end func2'
    
    if __name__=='__main__':
        p1 = Process(target = func1)
        p1.start()
        p2 = Process(target = func2)
        p2.start()
        # This is where I had to add the join() function.
        p1.join()
        p2.join()
    

    Furthermore, check out this thread: When to call .join() on a process?

  • 2020-11-27 04:39

    You are doing it correctly. :)

    Try running this silly piece of code:

    from multiprocessing import Process
    import sys
    
    rocket = 0
    
    def func1():
        global rocket
        print 'start func1'
        while rocket < sys.maxint:
            rocket += 1
        print 'end func1'
    
    def func2():
        global rocket
        print 'start func2'
        while rocket < sys.maxint:
            rocket += 1
        print 'end func2'
    
    if __name__=='__main__':
        p1 = Process(target = func1)
        p1.start()
        p2 = Process(target = func2)
        p2.start()
    

    You will see it print 'start func1' and then 'start func2' and then after a (very) long time you will finally see the functions end. But they will indeed execute simultaneously.

    Because processes take a while to start up, you may even see 'start func2' before 'start func1'.

  • 2020-11-27 04:43

    Here is another version, for when a dynamic list of processes needs to be run. I am including the two shell scripts in case you want to try it:

    t1.sh

    for i in {1..10}
      do 
         echo "1... t.sh i:"$i
         sleep 1
      done
    

    t2.sh

    for i in {1..3}
      do
         echo "2.. t2.sh i:"$i
         sleep 1
      done
    

    np.py

    import os
    from multiprocessing import Process, Lock
    
    def f(l, cmd):
        os.system(cmd)
    
    if __name__ == '__main__':
        lock = Lock()
    
        for cmd in ['sh t1.sh', 'sh t2.sh']:
            Process(target=f, args=(lock, cmd)).start()
    

    output

    1... t.sh i:1
    2.. t2.sh i:1
    1... t.sh i:2
    2.. t2.sh i:2
    1... t.sh i:3
    2.. t2.sh i:3
    1... t.sh i:4
    1... t.sh i:5
    1... t.sh i:6
    1... t.sh i:7
    1... t.sh i:8
    1... t.sh i:9
    1... t.sh i:10
    

    The "lock" argument is left unused here; it can be acquired before a task with l.acquire() and released afterwards with l.release().
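    A sketch of what actually using the lock could look like (my own addition, not from the answer): holding the lock around os.system serializes the commands, so their output no longer interleaves.

```python
import os
from multiprocessing import Lock, Process


def f(l, cmd):
    # Hold the lock while the command runs, so only one
    # command executes (and prints) at a time.
    l.acquire()
    try:
        return os.system(cmd)
    finally:
        l.release()


if __name__ == '__main__':
    lock = Lock()
    procs = [Process(target=f, args=(lock, cmd))
             for cmd in ['sh t1.sh', 'sh t2.sh']]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```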

  • 2020-11-27 04:44

    This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.

    To parallelize your example, you'd need to define your functions with the @ray.remote decorator, and then invoke them with .remote.

    import ray
    
    ray.init()
    
    # Define functions you want to execute in parallel using 
    # the ray.remote decorator.
    @ray.remote
    def func1():
        # does something
        pass
    
    @ray.remote
    def func2():
        # does something
        pass
    
    # Execute func1 and func2 in parallel.
    ray.get([func1.remote(), func2.remote()])
    

    If func1() and func2() return results, you need to rewrite the code as follows:

    ret_id1 = func1.remote()
    ret_id2 = func2.remote()
    ret1, ret2 = ray.get([ret_id1, ret_id2])
    

    There are a number of advantages of using Ray over the multiprocessing module. In particular, the same code will run on a single machine as well as on a cluster of machines. For more advantages of Ray see this related post.
