multithreading

Is the predict_proba method of scikit-learn's SGDClassifier thread safe?

☆樱花仙子☆ submitted on 2021-01-28 19:40:51
Question: I would like to expose a model built using sklearn.linear_model.SGDClassifier through a web API. Every web request would call into the model's predict_proba method; however, for performance and consistency reasons I will have just one instance of the model in the process. It would be created when the web application starts and begin serving requests once training completes. This raises the question: is the model's predict_proba method actually thread safe? Any help will be …
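
Prediction on an already-fitted linear model is essentially a read-only dot product, but if in doubt, the safe pattern is to guard the single shared instance with a lock. Below is a minimal sketch of that pattern; the toy training data, the handle_request function, and the thread counts are illustrative assumptions, not part of the original question.

    # sketch: one fitted SGDClassifier shared by many request threads
    from concurrent.futures import ThreadPoolExecutor
    from threading import Lock

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    X = np.random.rand(200, 4)                        # illustrative training data
    y = (X[:, 0] > 0.5).astype(int)
    model = SGDClassifier(loss="log_loss").fit(X, y)  # single shared instance ("log" on older scikit-learn)
    model_lock = Lock()

    def handle_request(sample):
        # serialize access; drop the lock if you are satisfied predict_proba never mutates state
        with model_lock:
            return model.predict_proba(sample.reshape(1, -1))

    with ThreadPoolExecutor(max_workers=8) as pool:
        probabilities = list(pool.map(handle_request, X[:10]))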

Thread safety when accessing data from N threads in the context of an async TCP server

社会主义新天地 submitted on 2021-01-28 19:39:06
Question: As the title says, I have a question concerning the following scenario (simplified example): assume that I have an object of the Generator class below, which continuously updates its dataChunk member (running in the main thread).

    class Generator {
        void generateData();
        uint8_t dataChunk[999];
    };

Furthermore, I have an async acceptor of TCP connections to which 1-N clients can connect (running in a second thread). The acceptor starts a new thread for each new client connection, in which an …
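
The question's own code is C++, where the answer usually amounts to protecting dataChunk with a std::mutex (or double-buffering it). Here is only a minimal Python analogue of that pattern, with illustrative names and timings, to show the shape of one writer plus N readers sharing a chunk:

    # sketch: one producer updating a shared chunk, N readers copying it under a lock
    import threading
    import time

    chunk = bytearray(999)              # stands in for dataChunk[999]
    chunk_lock = threading.Lock()

    def generate_data():                # the "Generator" role (writer)
        for counter in range(100):
            with chunk_lock:
                chunk[:] = bytes([counter % 256]) * len(chunk)
            time.sleep(0.01)

    def serve_client(client_id):        # the per-connection role (reader)
        for _ in range(5):
            with chunk_lock:
                snapshot = bytes(chunk) # copy while holding the lock, send it outside the lock
            time.sleep(0.02)
        print(f"client {client_id} last saw byte {snapshot[0]}")

    writer = threading.Thread(target=generate_data)
    readers = [threading.Thread(target=serve_client, args=(i,)) for i in range(3)]
    writer.start()
    for r in readers:
        r.start()
    writer.join()
    for r in readers:
        r.join()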

Implementing a Starve method (“Unrelease”/“Hold”) for SemaphoreSlim

瘦欲@ submitted on 2021-01-28 18:41:49
Question: I'm using a SemaphoreSlim with FIFO behaviour, and now I want to add a Starve(int amount) method to it that removes slots from the pool, sort of the opposite of Release(). If there are any running tasks, they will of course continue until they are done, since for the moment the semaphore is not keeping track of what is actually running and "owes" it a release call. The reason is that the user will dynamically control how many processes are allowed at any time for a given semaphore …
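
Semaphores generally have no built-in way to take permits back, so the usual approach is for Starve to acquire the permits it wants to retire and simply never release them, queued behind whatever is currently running. A minimal Python analogue of that idea; the class and method names here are illustrative, not the original .NET API:

    # sketch: "starving" a semaphore by acquiring permits that are never released
    import threading

    class StarvableSemaphore:
        def __init__(self, initial_count):
            self._sem = threading.Semaphore(initial_count)

        def acquire(self):
            self._sem.acquire()

        def release(self):
            self._sem.release()

        def starve(self, amount):
            # each permit acquired here and never released shrinks the pool by one;
            # run in a background thread so the caller is not blocked behind running work
            def eat_permits():
                for _ in range(amount):
                    self._sem.acquire()
            threading.Thread(target=eat_permits, daemon=True).start()

Growing the pool again is the mirror image: call release() the corresponding number of times.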

Python ThreadPoolExecutor terminate all threads

假装没事ソ submitted on 2021-01-28 18:16:49
Question: I am running a piece of Python code in which multiple threads are run through a ThreadPoolExecutor. Each thread is supposed to perform a task (fetching a webpage, for example). What I want to be able to do is terminate all threads, even if one of the threads fails. For instance:

    with ThreadPoolExecutor(self._num_threads) as executor:
        jobs = []
        for path in paths:
            kw = {"path": path}
            jobs.append(executor.submit(start, **kw))
        for job in futures.as_completed(jobs):
            result = job.result()
            print(result)
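
Python threads cannot be killed from outside, so the usual answer has two parts: cancel the futures that have not started yet as soon as one job raises, and give running workers a flag they can poll. A sketch under those assumptions (start, paths, and the stop event here are illustrative stand-ins for the question's code):

    # sketch: stop scheduling work as soon as any job fails
    import threading
    from concurrent.futures import ThreadPoolExecutor, as_completed

    stop_event = threading.Event()          # cooperating workers poll this

    def start(path):
        if stop_event.is_set():
            return None                     # abandon work once a failure was seen
        # ... fetch the page at `path` here ...
        return path

    paths = ["a", "b", "c"]
    with ThreadPoolExecutor(4) as executor:
        jobs = [executor.submit(start, path) for path in paths]
        try:
            for job in as_completed(jobs):
                print(job.result())         # a worker's exception re-raises here
        except Exception:
            stop_event.set()                                      # tell running workers to bail out
            executor.shutdown(wait=False, cancel_futures=True)    # cancel_futures needs Python 3.9+
            raise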

Reading higher frequency data in thread and plotting graph in real-time with Tkinter

◇◆丶佛笑我妖孽 submitted on 2021-01-28 12:25:19
Question: Over the last couple of weeks, I've been trying to make an application that can read EEG data from an OpenBCI Cyton (at 250 Hz) and plot a graph in 'real time'. What seems to work better here are threads. I applied the tips I found here [1] to make the thread communicate with Tkinter, but the application still doesn't work (it gives me the error RecursionError: maximum recursion depth exceeded while calling a Python object). Maybe I'm doing something wrong because I'm trying to use multiple .py files? See …
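
The usual shape of the fix is to keep every Tkinter call on the main thread: the reader thread only pushes samples into a queue.Queue, and the GUI drains the queue from a callback that reschedules itself with after(); rescheduling, rather than calling the function directly, is what keeps the call stack from growing. A minimal sketch with an illustrative fake data source standing in for the OpenBCI board:

    # sketch: worker thread fills a queue, the Tkinter main loop polls it with after()
    import queue
    import random
    import threading
    import time
    import tkinter as tk

    samples = queue.Queue()

    def read_board():                       # stands in for the OpenBCI reader
        while True:
            samples.put(random.random())    # ~250 Hz in the real application
            time.sleep(0.004)

    root = tk.Tk()
    label = tk.Label(root, text="waiting...")
    label.pack()

    def poll():
        latest = None
        while not samples.empty():          # drain everything that arrived since the last tick
            latest = samples.get_nowait()
        if latest is not None:
            label.config(text=f"{latest:.3f}")
        root.after(50, poll)                # reschedule; do NOT call poll() directly

    threading.Thread(target=read_board, daemon=True).start()
    root.after(50, poll)
    root.mainloop()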

How do I prevent window lock when binding large dataset to grid in C#?

点点圈 submitted on 2021-01-28 12:23:13
Question: I've got a window that will be filled with a grid. On a background thread I asynchronously retrieve data tables from several different servers, and I then need to display these tables in the grid. I have a progress bar, driven from the background thread, that displays while I'm establishing connections and pulling the data, but when the grid is being filled the UI thread is (understandably) blocked. Therefore the progress bar stalls and it looks like the window is frozen. Filling the grid can take anywhere …
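
The technique that keeps the window responsive is the same in any toolkit: marshal the retrieved rows back to the UI thread and insert them in small batches, yielding to the event loop between batches (in WinForms that would be Invoke/BeginInvoke or a virtual-mode grid). Since the original is C#, the following is only a minimal Python/Tkinter analogue of the batching idea, with illustrative data:

    # sketch: fill a grid in small batches on the UI thread so the window stays responsive
    import tkinter as tk
    from tkinter import ttk

    rows = [(i, f"value-{i}") for i in range(50_000)]   # stands in for the retrieved tables

    root = tk.Tk()
    grid = ttk.Treeview(root, columns=("id", "value"), show="headings")
    grid.pack(fill="both", expand=True)

    def fill(start=0, batch=500):
        for row_id, value in rows[start:start + batch]:
            grid.insert("", "end", values=(row_id, value))
        if start + batch < len(rows):
            root.after(1, fill, start + batch, batch)   # yield to the event loop between batches

    root.after(1, fill)
    root.mainloop()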

How can I catch when a thread dies in ThreadPoolExecutor()?

…衆ロ難τιáo~ submitted on 2021-01-28 12:20:25
Question: I have some very simple Python code that runs a bunch of inputs through various processes via ThreadPoolExecutor(). Now, sometimes one or more of the threads dies quietly. It is actually great that the rest of the threads continue and the code completes, but I would like to put together some type of summary that tells me which, if any, of the threads have died. I've found several examples where folks want the whole thing to shut down, but haven't seen anything yet where the process …
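
With ThreadPoolExecutor a worker that raises is not actually lost: the exception is stored on its Future and surfaces when you call result() or exception(). A summary therefore falls out of mapping each future back to its input and collecting the stored exceptions; the process function and inputs below are illustrative:

    # sketch: report which inputs failed instead of letting them die quietly
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def process(item):                      # illustrative worker
        if item % 5 == 0:
            raise ValueError(f"bad item {item}")
        return item * item

    inputs = range(1, 20)
    failed, succeeded = {}, {}

    with ThreadPoolExecutor(max_workers=4) as pool:
        future_to_item = {pool.submit(process, item): item for item in inputs}
        for future in as_completed(future_to_item):
            item = future_to_item[future]
            exc = future.exception()        # non-None if the worker raised
            if exc is not None:
                failed[item] = exc
            else:
                succeeded[item] = future.result()

    print(f"{len(succeeded)} ok, {len(failed)} died: {sorted(failed)}")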

Make prediction with Keras model using multiple CPUs

℡╲_俬逩灬. submitted on 2021-01-28 12:04:00
Question: I am trying to make predictions with a Keras model (using TensorFlow 2.0) using multiple CPUs. I have tried this:

    tf.config.threading.set_intra_op_parallelism_threads(4)
    tf.config.threading.set_inter_op_parallelism_threads(4)

While I am not getting an error, I am not sure whether this is the right approach. Can predictions be multithreaded? Many thanks. Source: https://stackoverflow.com/questions/58974483/make-prediction-with-keras-model-using-multiple-cpus
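
Those two calls are indeed the TensorFlow 2.x knobs for CPU parallelism; the caveats are that they must run before TensorFlow executes any op, and that model.predict then spreads the underlying math across the configured threads by itself, so there is no need to call predict from several Python threads. A minimal sketch (the toy model and data are illustrative):

    # sketch: configure the TF thread pools first, then let predict() use them
    import numpy as np
    import tensorflow as tf

    # must happen before any TensorFlow op is executed
    tf.config.threading.set_intra_op_parallelism_threads(4)   # threads inside one op (e.g. a matmul)
    tf.config.threading.set_inter_op_parallelism_threads(4)   # independent ops running concurrently

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(100,)),
        tf.keras.layers.Dense(1),
    ])

    data = np.random.rand(10_000, 100).astype("float32")
    predictions = model.predict(data, batch_size=256)   # CPU work is spread over the pools above
    print(predictions.shape)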