multiprocessing

Why does communicate deadlock when used with multiple Popen subprocesses?

我的未来我决定 submitted on 2021-02-08 12:53:08
Question: The following issue does not occur in Python 2.7.3. However, it occurs with both Python 2.7.1 and Python 2.6 on my machine (64-bit Mac OS X 10.7.3). This is code I will eventually distribute, so I would like to know whether there is any way to complete this task that does not depend so dramatically on the Python version. I need to open multiple subprocesses in parallel and write STDIN data to each of them. Normally I would do this using the Popen.communicate method. However, communicate is …
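A common workaround (a sketch, not necessarily what the asker settled on) is to give each child its own thread and let that thread call communicate, so no single pipe can fill up and block the others. The cat commands and input strings below are placeholders:

    import subprocess
    import threading

    def run_with_input(cmd, data, results, idx):
        # Each thread owns one child and drives its pipes via communicate().
        proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate(data)
        results[idx] = (proc.returncode, out, err)

    cmds = [['cat'], ['cat'], ['cat']]      # hypothetical commands
    inputs = [b'one\n', b'two\n', b'three\n']
    results = [None] * len(cmds)

    threads = [threading.Thread(target=run_with_input, args=(c, d, results, i))
               for i, (c, d) in enumerate(zip(cmds, inputs))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(results)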

Multiprocessing and niceness value

心不动则不痛 submitted on 2021-02-08 12:22:22
Question: Does anyone know of an easy way to set the niceness value of a Process or Pool when it is created in multiprocessing?

Answer 1: os.nice(increment) adds increment to the process's "niceness" and returns the new niceness (availability: Unix). From http://docs.python.org/library/os.html#os.nice. Is there a reason you can't call this in the child process?

Answer 2: Try importing the ctypes module and looking for pthread_setschedparam() or SetThreadPriority() (Linux / Windows).

Source: https://stackoverflow.com
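With a Pool, the initializer hook is a convenient place to call os.nice once per worker. A minimal sketch (Unix only; the increment of 10 is an arbitrary choice):

    import os
    from multiprocessing import Pool

    def lower_priority():
        # Runs once inside each worker process right after it starts.
        os.nice(10)

    def work(x):
        return x * x

    if __name__ == '__main__':
        with Pool(processes=4, initializer=lower_priority) as pool:
            print(pool.map(work, range(10)))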

Gzip issue with multiprocessing pool

白昼怎懂夜的黑 submitted on 2021-02-08 08:48:22
Question: I have a gzip file handle that I'm writing to from a multiprocessing pool. Unfortunately, the output file seems to become corrupted after a certain point, so doing something like zcat out | wc gives: gzip: out: invalid compressed data--format violated. I'm dealing with the problem by not using gzip, but I'm curious as to why this is happening and whether there is any solution. Not sure if it matters, but I'm running the code on a remote Linux machine that I don't control, and my guess is that it's …
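Several processes writing to the same gzip handle will interleave their compressed output and break the stream. One sketch of a fix (assuming the heavy work, not the writing, is the bottleneck; the process function and filename are placeholders) is to let workers return plain data and make the parent the only writer:

    import gzip
    from multiprocessing import Pool

    def process(chunk):
        # CPU-heavy work happens in the workers...
        return chunk.upper()

    if __name__ == '__main__':
        chunks = ['line %d\n' % i for i in range(1000)]
        with Pool(4) as pool, gzip.open('out.gz', 'wt') as fh:
            # ...but only the parent touches the gzip stream, in order.
            for result in pool.imap(process, chunks):
                fh.write(result)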

Python multiprocessing.Pool.map dying silently

半腔热情 submitted on 2021-02-08 08:42:31
Question: I have tried to put a for loop in parallel to speed up some code. Consider this:

    from multiprocessing import Pool

    results = []

    def do_stuff(str):
        print str
        results.append(str)

    p = Pool(4)
    p.map(do_stuff, ['str1', 'str2', 'str3', ...])  # many strings here ~ 2000
    p.close()

    print results

I have some debug messages showing from do_stuff to keep track of how far the program gets before dying. It seems to die at different points each time through. For example, it will print 'str297' and then it will …
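As an aside, appending to the global results inside do_stuff only mutates each worker's copy of the list; the parent's results stays empty. A sketch of the usual pattern, collecting results from map's return value instead (Python 3 here, with placeholder data):

    from multiprocessing import Pool

    def do_stuff(s):
        # Runs in a separate process; return the result instead of
        # mutating parent-side globals, which workers cannot see.
        return s.upper()

    if __name__ == '__main__':
        with Pool(4) as p:
            results = p.map(do_stuff, ['str%d' % i for i in range(2000)])
        print(len(results))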

Using sets with the multiprocessing module

限于喜欢 submitted on 2021-02-08 05:16:28
Question: I can't seem to share a set across processes using a Manager instance. A condensed version of my code:

    from multiprocessing.managers import SyncManager
    manager = SyncManager()
    manager.start()
    manager.register(Set)

I've also tried register(type(Set)) and register(Set()), but I'm not overly surprised that neither of them worked (the first should evaluate to Class, I think). The exception I get in all cases is TypeError: __name__ must be set to a string object in line 675 of managers.py. Is …
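The first argument to register is a string type id, not the class itself, and registration has to happen before the manager starts, typically on a SyncManager subclass. A minimal sketch that shares a built-in set (note the default proxy exposes only set's public methods such as add, remove, and copy, not dunders like __contains__):

    from multiprocessing.managers import SyncManager

    class SetManager(SyncManager):
        pass

    # Register under the type id 'set' so manager.set() creates one
    # inside the manager's server process.
    SetManager.register('set', set)

    if __name__ == '__main__':
        manager = SetManager()
        manager.start()
        shared = manager.set()
        shared.add(1)
        shared.add(2)
        print(shared.copy())   # copy() ships back a plain set: {1, 2}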

Can multiple OS processes run in parallel on multicore CPU?

孤人 submitted on 2021-02-08 05:11:00
Question: So I got into a debate about whether a multicore CPU allows parallel execution of separate processes. As far as I understand, each core allows executing different threads, but they all have to belong to one process. Or am I wrong? My reasoning is that, while each core has a separate set of registers and L1/L2 cache (depending on hardware), they all have to share other things like the L3 cache or the TLB (I don't have a lot of knowledge about CPU architecture, so feel free to correct me). I tried searching for an …
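For what it's worth, separate processes do run in parallel on separate cores; sharing an L3 cache doesn't prevent that. A quick sketch that makes this visible from Python: four CPU-bound tasks in four processes finish in roughly the time of one and report distinct PIDs (timings will vary by machine):

    import os
    import time
    from multiprocessing import Pool

    def burn(n):
        # Pure CPU work; each call runs in its own OS process.
        total = 0
        for i in range(n):
            total += i * i
        return os.getpid(), total

    if __name__ == '__main__':
        t0 = time.time()
        with Pool(4) as pool:
            results = pool.map(burn, [10_000_000] * 4)
        print('distinct worker pids:', sorted({pid for pid, _ in results}))
        print('elapsed: %.2fs' % (time.time() - t0))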

prevent __del__ from being called in multiprocessing

隐身守侯 submitted on 2021-02-07 19:54:12
Question: When playing around with multiprocessing, I noticed that in the following script __del__ is called twice (once in the child processes and once in the parent):

    class myclass(object):
        def __init__(self, val):
            self.val = val
            print("Initializing %s" % str(self.val))
        def __del__(self):
            print("deleting %s" % str(self.val))

    if __name__ == "__main__":
        import multiprocessing
        p = multiprocessing.Pool(4)
        obj_list = p.map(myclass, range(30))
        raw_input()

For this script, it doesn't matter ... but what if __del__ …
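The doubled call happens because Pool.map pickles each instance in the worker and unpickles a copy in the parent, and both copies are eventually garbage collected. One common guard (a sketch; whether skipping the parent's copy is appropriate depends on the cleanup) is to record the creating PID in __init__, which unpickling does not re-run, so only the original copy acts:

    import os

    class MyClass(object):
        def __init__(self, val):
            self.val = val
            self._owner_pid = os.getpid()   # set only where __init__ runs

        def __del__(self):
            # The unpickled copy in the other process inherits the original
            # _owner_pid, so its __del__ skips the real cleanup.
            if os.getpid() == self._owner_pid:
                print("releasing %s in owning process" % self.val)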
