Deadlock with logging in a multiprocess/multithread Python script

Backend · open · 2 answers · 1865 views
Asked by 佛祖请我去吃肉 on 2020-12-16 05:42

I am facing a problem with collecting logs from the following script. Once I set SLEEP_TIME to too "small" a value, the LoggingThread threads somehow block the logging module.

2 Answers
  • 2020-12-16 06:39

    I've run into a similar issue just recently while using the logging module together with the Pathos multiprocessing library. I'm still not 100% sure, but it seems that in my case the problem may have been caused by the logging handler trying to reuse a lock object from within different processes.

    I was able to fix it with a simple wrapper around the default logging handler:

    import threading
    from collections import defaultdict
    from multiprocessing import current_process
    
    import colorlog
    
    
    class ProcessSafeHandler(colorlog.StreamHandler):
        """StreamHandler variant that keeps one lock per process, so a
        lock inherited across fork() in a held state is never reused."""
    
        def __init__(self):
            super().__init__()
    
            # Lazily create a fresh RLock the first time each process
            # (identified by its pid) touches the handler.
            self._locks = defaultdict(threading.RLock)
    
        def acquire(self):
            current_process_id = current_process().pid
            self._locks[current_process_id].acquire()
    
        def release(self):
            current_process_id = current_process().pid
            self._locks[current_process_id].release()
    
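    For what it's worth, hooking the handler into logging could look roughly like this (the format string and logger name here are my own illustration, not part of the original answer):
    
    import logging
    
    handler = ProcessSafeHandler()
    handler.setFormatter(colorlog.ColoredFormatter(
        "%(log_color)s%(levelname)s:%(name)s:%(message)s"))
    # Route all root-logger records through the process-safe handler.
    logging.basicConfig(level=logging.DEBUG, handlers=[handler])
    
    logging.getLogger(__name__).info("safe from any process")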
  • 2020-12-16 06:41

    This is probably bug 6721 ("Locks in the standard library should be sanitized on fork").

    The problem is common in any situation where you mix locks, threads, and forks. If thread 1 holds a lock while thread 2 calls fork(), then in the forked child only thread 2 exists, and the lock is held forever. In your case, that lock is logging.StreamHandler.lock.
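    To see the mechanism in isolation, here is a minimal sketch (my own illustration, POSIX-only, not taken from the original script):
    
    import os
    import threading
    import time
    
    lock = threading.Lock()
    
    def hold_lock():
        # Thread 1: grabs the lock and holds it across the fork below.
        with lock:
            time.sleep(5)
    
    threading.Thread(target=hold_lock, daemon=True).start()
    time.sleep(0.1)  # make sure the worker thread owns the lock first
    
    pid = os.fork()
    if pid == 0:
        # Child process: only the forking thread exists here, but the
        # lock was copied in its held state, so this blocks forever.
        lock.acquire()
        os._exit(0)  # never reached
    os.waitpid(pid, 0)  # the parent hangs here too: the child is deadlocked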

    A fix for the logging module can be found here (permalink). Note that you need to take care of any other locks, too.
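    On Python 3.7+, one mitigation along the same lines is os.register_at_fork, which can re-create a handler's lock in the child (this is a sketch of the technique, not the patch linked above; recent CPython versions do something similar for logging's own locks internally):
    
    import logging
    import os
    
    handler = logging.StreamHandler()
    logging.getLogger().addHandler(handler)
    
    # After every fork(), give the child a brand-new lock so it never
    # inherits one left in a held state by a parent thread.
    os.register_at_fork(after_in_child=handler.createLock)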
