multiprocessing

multiprocessing.Process.is_alive() returns True although the process has finished; why?

Submitted by 扶醉桌前 on 2020-01-29 03:37:20
Question: I use multiprocessing.Process to create a child process and then call os.wait4 to wait until the child exits. When the actual child process finishes, multiprocessing.Process.is_alive() still returns True. That is contradictory. Why? Code:

    from multiprocessing import Process
    import os, sys

    proc = Process(target=os.system, args=("sleep 2", ))
    proc.start()
    print "is_alive()", proc.is_alive()
    ret = os.wait4(proc.pid, 0)
    procPid, procStatus, procRes = ret
    print "wait4 = ", ret
    ## Puzzled!
    print "----Puzzled below
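The likely cause: is_alive() works by having the Process object's internal handle call os.waitpid itself. Reaping the child directly with os.wait4 consumes the exit status behind multiprocessing's back, so the library's own waitpid call finds no child, no return code is ever recorded, and is_alive() keeps reporting True. A minimal sketch of the usual fix (written in Python 3 syntax), letting multiprocessing do the reaping:

    from multiprocessing import Process
    import os

    if __name__ == "__main__":
        proc = Process(target=os.system, args=("sleep 2",))
        proc.start()

        # join() waits for the child and reaps it through the Process
        # object itself, so the cached exit status gets updated.
        proc.join()

        print("is_alive() after join:", proc.is_alive())  # False
        print("exitcode:", proc.exitcode)                 # 0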

freeze_support bug in using scikit-learn in the Anaconda python distro?

Submitted by 99封情书 on 2020-01-25 18:18:40
Question: I just want to be sure this is not about my code but something that needs to be fixed in the relevant Python package. (By the way, does this look like something I could patch manually even before the vendor ships an update?) I was using scikit-learn 0.15b1, which called these. Thanks!

    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "C:\Anaconda\lib\multiprocessing\forking.py", line 380, in main
        prepare(preparation_data)
      File "C:\Anaconda\lib\multiprocessing\forking.py", line 495
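This traceback is the usual Windows-spawn symptom rather than an Anaconda-specific bug: Windows has no fork, so multiprocessing re-imports the main module in every worker, and unguarded top-level code recurses into prepare(). Before any vendor patch, the standard workaround is to guard the entry point. A hedged sketch, assuming a scikit-learn call that uses n_jobs to spawn workers:

    from multiprocessing import freeze_support
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def main():
        X, y = make_classification(n_samples=200, random_state=0)
        # n_jobs=-1 makes scikit-learn start worker processes; on Windows
        # each worker re-imports this module, so the __main__ guard below
        # keeps the re-import from re-running this function.
        clf = RandomForestClassifier(n_jobs=-1).fit(X, y)
        print(clf.score(X, y))

    if __name__ == "__main__":
        freeze_support()  # a no-op unless the script is frozen into an .exe
        main()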

Why doesn't multiprocessing pool map speed up compared to serial map?

Submitted by 断了今生、忘了曾经 on 2020-01-25 07:47:20
Question: I have this very simple Python code that I want to speed up by parallelizing it. However, no matter what I try, multiprocessing.Pool.map gains nothing over the standard map. I've read other threads where people use it with very small functions that don't parallelize well and lead to excessive overhead, but I would think that shouldn't be the case here. Am I doing something wrong? Here's the example:

    #!/usr/bin/python
    import numpy, time

    def AddNoise(sample):
        #time.sleep(0.001)
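When Pool.map shows no speedup, the usual culprit is that each call does only microseconds of work while every argument and result is pickled across a process boundary, so IPC overhead swallows the gain. A hedged sketch (add_noise is a hypothetical stand-in for the question's AddNoise) showing how a large chunksize amortises that overhead:

    import time
    import numpy as np
    from multiprocessing import Pool

    def add_noise(sample):
        # Hypothetical stand-in for the question's AddNoise.
        return sample + np.random.normal(0.0, 0.1, sample.shape)

    if __name__ == "__main__":
        samples = [np.zeros(10_000) for _ in range(2_000)]

        t0 = time.time()
        serial = list(map(add_noise, samples))
        print("serial  :", time.time() - t0)

        t0 = time.time()
        with Pool() as pool:
            # A large chunksize batches many samples into each pickled
            # message, amortising the inter-process communication cost
            # that otherwise eats the speedup for cheap per-item work.
            parallel = pool.map(add_noise, samples, chunksize=250)
        print("parallel:", time.time() - t0)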

How to have a PyQt window call a method when its “X” close button is selected

Submitted by 一曲冷凌霜 on 2020-01-25 03:00:19
Question: I'm trying to have a method called when the "X" close button of a PyQt window is selected. Briefly, I have a class that subclasses QtGui.QWidget, and I want it to call one of its methods when the window is closed using the "X" close button, in order to wrap up some subprocesses. How could this be done? The code is shown below. The method of the class interface that I want to have called is stylusProximityControlOff(). This method terminates a subprocess, which is potentially a little messy, but
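Qt delivers a QCloseEvent to the widget whenever the window's "X" button (or any other close request) fires, and overriding closeEvent is the standard hook for cleanup. A minimal sketch using PyQt4, matching the QtGui.QWidget spelling in the question; the stylusProximityControlOff body here is a placeholder:

    import sys
    from PyQt4 import QtGui

    class Interface(QtGui.QWidget):
        def closeEvent(self, event):
            # Qt calls this on every close request, including the
            # window manager's "X" button, so cleanup hooks in here.
            self.stylusProximityControlOff()
            event.accept()  # let the close proceed; event.ignore() would veto it

        def stylusProximityControlOff(self):
            # Placeholder for terminating the stylus proximity subprocess.
            print("terminating stylus proximity subprocess...")

    if __name__ == "__main__":
        app = QtGui.QApplication(sys.argv)
        window = Interface()
        window.show()
        sys.exit(app.exec_())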

Python multiprocess non-blocking intercommunication using Pipes

Submitted by 旧城冷巷雨未停 on 2020-01-24 12:48:20
Question: Is it possible to receive process intercommunication through Pipes in a non-blocking fashion? Consider the following code:

    from multiprocessing import Process, Pipe
    import time

    def f(conn):
        time.sleep(3)
        conn.send('Done')
        conn.close()

    if __name__ == '__main__':
        parent_conn, child_conn = Pipe()
        p = Process(target=f, args=(child_conn,))
        p.start()
        while True:
            print('Test')
            msg = parent_conn.recv()
            if msg == 'Done':
                break
        print('The End')
        p.join()

The parent_conn.recv() will block the while-loop
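Yes: the Connection objects returned by Pipe() expose poll(timeout), which reports whether data is waiting without blocking beyond the timeout. A sketch of the question's loop reworked around it:

    from multiprocessing import Process, Pipe
    import time

    def f(conn):
        time.sleep(3)
        conn.send('Done')
        conn.close()

    if __name__ == '__main__':
        parent_conn, child_conn = Pipe()
        p = Process(target=f, args=(child_conn,))
        p.start()
        while True:
            print('Test')
            # poll() returns after at most 0.5 s instead of blocking
            # indefinitely, so the loop can do other work between checks.
            if parent_conn.poll(0.5):
                msg = parent_conn.recv()
                if msg == 'Done':
                    break
        print('The End')
        p.join()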

Catching unpickleable exceptions and re-raising

Submitted by 前提是你 on 2020-01-24 09:44:25
Question: This is a follow-up to my question "Hang in Python script using SQLAlchemy and multiprocessing". As discussed in that question, pickling exceptions is problematic in Python. This is usually not an issue, but one case where it is, is when an error occurs inside the multiprocessing module. Since multiprocessing moves objects around by pickling them, if an error occurs inside a multiprocessing worker and the resulting exception cannot be pickled, the entire process may hang, as demonstrated in that question. One possible approach is to fix all the
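One widely used pattern (a hedged sketch, not necessarily the questioner's eventual fix): catch everything inside the worker, flatten the traceback to a string, and re-raise a plain exception whose only argument is that string, which always pickles cleanly:

    import traceback
    from multiprocessing import Pool

    class RemoteError(Exception):
        """Carries a worker failure as a plain string, which always pickles."""

    def risky(arg):
        # Hypothetical stand-in for the real work (e.g. SQLAlchemy calls).
        if arg == 2:
            raise ValueError("boom")
        return arg * arg

    def safe_call(arg):
        try:
            return risky(arg)
        except Exception:
            # Exception subclasses with unusual __init__ signatures can
            # fail to unpickle in the parent; a string-based wrapper never does.
            raise RemoteError(traceback.format_exc())

    if __name__ == '__main__':
        with Pool(2) as pool:
            try:
                print(pool.map(safe_call, [1, 2, 3]))
            except RemoteError as e:
                print("worker failed:\n", e)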

Sharing a postgres connection pool between Python multiprocessing workers

Submitted by 亡梦爱人 on 2020-01-24 07:29:45
Question: I am trying to use psycopg2's connection pool with Python's multiprocessing library. Currently, attempting to share the connection pool among worker processes in this way causes:

    psycopg2.OperationalError: SSL error: decryption failed or bad record mac

The following code should reproduce the error, with the caveat that the reader first has to set up a simple Postgres database.

    from multiprocessing import Pool
    from psycopg2 import pool
    import psycopg2
    import psycopg2.extras

    connection_pool
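The "decryption failed or bad record mac" error is the classic sign of a single SSL-wrapped connection being used from several processes at once: workers forked after the pool was created inherit the same socket and interleave encrypted records on it. A hedged sketch of the usual remedy, giving each worker its own connection via a Pool initializer (the DSN and query are placeholders):

    from multiprocessing import Pool
    import psycopg2

    conn = None  # one connection per worker process, created after the fork

    def init_worker(dsn):
        global conn
        # Runs once inside each freshly started worker, so the connection
        # (and its SSL session) is never shared across process boundaries.
        conn = psycopg2.connect(dsn)

    def fetch_one(user_id):
        with conn.cursor() as cur:
            cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
            return cur.fetchone()

    if __name__ == '__main__':
        dsn = "dbname=test user=postgres"  # placeholder DSN
        with Pool(processes=4, initializer=init_worker, initargs=(dsn,)) as p:
            print(p.map(fetch_one, [1, 2, 3]))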