Write data to disk in Python as a background process

Asked by 礼貌的吻别 on 2021-02-04 17:59

I have a program in Python that basically does the following:

for j in xrange(200):
    # 1) Compute a bunch of data
    # 2) Write data to disk

Step 1 takes considerably longer than step 2, so I'd like the write to disk to happen in the background while the loop moves on to computing the next chunk of data. How can I do that?
3 Answers
  • 2021-02-04 18:22

    You can use Queue.Queue (module: Queue) together with threading.Thread (or thread.start_new_thread if you just want to run a bare function; module: threading): have the main loop push each chunk of data onto the queue, and a dedicated thread pull chunks off the queue and write them to disk. A file write is I/O-bound rather than CPU-intensive, so the GIL doesn't get in the way here.
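
    A minimal sketch of that approach, using Python 2 module names to match the question (compute_data is a hypothetical stand-in for step 1):

    import Queue
    import threading

    def compute_data(j):
        # stand-in for the expensive computation (step 1)
        return "chunk %d\n" % j

    def writer(q):
        # drain the queue and write chunks to disk until the sentinel arrives
        with open("output_file", "w") as outfile:
            while True:
                data = q.get()
                if data is None:  # sentinel: no more data coming
                    return
                outfile.write(data)

    q = Queue.Queue()
    t = threading.Thread(target=writer, args=(q,))
    t.start()

    for j in xrange(200):
        q.put(compute_data(j))  # the writer thread handles step 2 in the background

    q.put(None)  # signal the writer to finish
    t.join()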

  • 2021-02-04 18:26

    A simple way would be to use just threading and a queue, as in the answer above. On the other hand, if the computing part does not depend on global state and your machine has multiple CPU cores, a more efficient way is to use a process pool:

    from multiprocessing import Pool

    def compute_data(x):
        return some_calculation_with(x)  # placeholder for the real computation

    if __name__ == '__main__':
        pool = Pool(processes=4)  # e.g. on a quad-core machine, start 4 workers

        with open("output_file", "w") as outfile:
            # pool.imap yields results in input order, as they become available
            for calculation_result in pool.imap(compute_data, range(200)):
                outfile.write(calculation_result)
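
    (pool.imap preserves the input order of the results; if the order of the output file doesn't matter, pool.imap_unordered works the same way but can be slightly faster, since it yields each result as soon as any worker produces it.)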
    
  • 2021-02-04 18:34

    You could try using multiple processes like this:

    import multiprocessing as mp

    def compute(j):
        data = some_calculation_with(j)  # placeholder: compute a bunch of data
        return data

    def write(data):
        # placeholder: write data to disk, e.g. append it to the output file
        with open("output_file", "a") as outfile:
            outfile.write(data)

    if __name__ == '__main__':
        pool = mp.Pool()
        for j in xrange(200):
            pool.apply_async(compute, args=(j,), callback=write)
        pool.close()
        pool.join()
    

    pool = mp.Pool() will create a pool of worker processes. By default, the number of workers equals the number of CPU cores your machine has.

    Each pool.apply_async call queues a task to be run by a worker in the pool. When a worker becomes available, it runs compute(j). When the worker returns a value, a thread in the main process runs the callback write(data), where data is the value the worker returned.

    Some caveats:

    • The data has to be picklable, since it is being communicated from the worker process back to the main process via a Queue.
    • There is no guarantee that the order in which the workers complete tasks matches the order in which the tasks were submitted to the pool, so the order in which the data is written to disk may not correspond to j running from 0 to 199. One way around this is to write the data to a sqlite (or other) database with j as one of the fields of the data. Then, when you wish to read the data in order, you can SELECT * FROM table ORDER BY j (see the sketch after this list).
    • Using multiple processes will increase the amount of memory required as data is generated by the worker processes and data waiting to be written to disk accumulates in the Queue. You might be able to reduce the amount of memory required by using NumPy arrays. If that is not possible, then you might have to reduce the number of processes:

      pool = mp.Pool(processes=1) 
      

      That will create one worker process (to run compute), leaving the main process to run write. Since compute takes longer than write, the Queue won't get backed up with more than one chunk of data to be written to disk. However, you would still need enough memory to compute on one chunk of data while writing a different chunk of data to disk.

      If you do not have enough memory to do both simultaneously, then you have no choice -- your original code, which runs compute and write sequentially, is the only way.
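
    Regarding the second caveat, here is a minimal sketch of the database idea, assuming each chunk of data can be stored as text (the file name, table name, and column names are illustrative):

    import sqlite3
    import multiprocessing as mp

    def compute(j):
        # stand-in for the real computation
        return (j, "result of chunk %d\n" % j)

    if __name__ == '__main__':
        # the callback runs in the pool's result-handler thread, not the main
        # thread, hence check_same_thread=False on the connection
        conn = sqlite3.connect("output.db", check_same_thread=False)
        conn.execute("CREATE TABLE IF NOT EXISTS results (j INTEGER PRIMARY KEY, data TEXT)")

        def write(result):
            # runs in the main process; storing j lets us read rows back in order
            conn.execute("INSERT INTO results (j, data) VALUES (?, ?)", result)
            conn.commit()

        pool = mp.Pool()
        for j in xrange(200):
            pool.apply_async(compute, args=(j,), callback=write)
        pool.close()
        pool.join()  # all callbacks have run once join() returns

        # read the data back in order, regardless of completion order
        for j, data in conn.execute("SELECT j, data FROM results ORDER BY j"):
            pass  # process each chunk here
        conn.close()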
