gil

Multi-threading in Python: is it really performance-efficient most of the time?

无人久伴 submitted on 2019-12-06 09:09:56
In my limited understanding, performance is the factor that drives multi-threaded programming in most cases, though not all (irrespective of Java or Python). I was reading this enlightening article on the GIL on SO. The article summarizes that Python adopts the GIL mechanism, i.e. only a single thread can execute Python bytecode at any given time. This makes single-threaded applications really fast. My question is as follows: since only one thread is served at a given point, does the multiprocessing or thread module provide a way to overcome this limitation imposed by the GIL? If not, what features
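On the multiprocessing part of the question: yes, the multiprocessing module sidesteps the GIL by giving each worker its own interpreter process, each with its own GIL. A minimal sketch (the fib helper is illustrative, not from the question; assumes a POSIX system where processes are forked):

```python
from multiprocessing import Pool

def fib(n):
    # CPU-bound work: each worker process has its own interpreter and
    # its own GIL, so these calls can run on separate cores in parallel.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        # Pool.map distributes the inputs across the worker processes.
        results = pool.map(fib, [20, 21, 22, 23])
    print(results)
```

Threads in the same process, by contrast, still take turns holding the single GIL for pure-Python bytecode, so they only help when the work releases the GIL (I/O, many C extensions).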

The relationship between synchronization locks and the GIL

孤街浪徒 submitted on 2019-12-06 08:40:31
# author: 来童星  # date: 2019/12/2

Python threads run under the control of the GIL: between threads, access to the whole Python interpreter, and to the C API that Python provides, is mutually exclusive. This can be seen as Python's kernel-level mutual exclusion mechanism. But this mutual exclusion is not something we can control, so we also need another, controllable mutual exclusion mechanism: user-level mutual exclusion. Just as kernel-level mutual exclusion protects the kernel's shared resources, user-level mutual exclusion protects the shared resources of user programs.

What the GIL does: for one interpreter, only one thread can be executing bytecode at a time, so at any moment only one bytecode instruction is being executed, by one thread. The GIL therefore guarantees thread safety at the bytecode level. But an operation such as x += 1 takes several bytecode instructions, and a thread switch can happen partway through them; that is how data races arise.

Source: https://www.cnblogs.com/startl/p/11972931.html
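The user-level fix for the x += 1 race described above is a threading.Lock; a minimal sketch (the counter and thread counts are illustrative):

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    # x += 1 compiles to several bytecodes (load, add, store); a thread
    # switch between them can lose updates. Holding a user-level Lock
    # makes the whole read-modify-write sequence atomic with respect
    # to the other threads that also take the lock.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 -- no updates lost
```

Without the lock, the same program may print less than 400000, which is exactly the data race the excerpt describes.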

Python's GIL problem with multi-threading, and the question of CPU core allocation

感情迁移 submitted on 2019-12-05 20:41:12
A detailed description of the multi-threading problem in Python: Python's scheme for running multiple threads is not perfect. Pure Python multi-threading can only achieve concurrent execution, which wastes resources on today's multi-core CPUs; other languages do not have this problem.

This is all for historical reasons. Back in the 1980s, given the hardware of the time, computers had only single-core CPUs, not today's multi-core ones. To implement multitasking on a single core, Guido van Rossum, the inventor of Python, introduced a rather elegant concept: the GIL.

The GIL solved the problem of multitasking on a single-core CPU, but as hardware evolved it also exposed the GIL's drawback: on today's multi-core CPUs, it cannot run multi-threaded tasks in parallel.

To discuss how the GIL works, first a brief introduction to the concept of multitasking. Whether you use multiple processes or multiple threads, at the CPU-execution stage it is threads that the CPU is allocated to, because at that stage processes are broken down into threads for execution. Each process can be divided into one main thread and n child threads, and at execution time it is the threads that compete for CPU time slices; that is, the basic unit of CPU resource allocation and scheduling is the thread. The effect of multitasking is achieved through a round-robin time-slice mechanism. For the details of these mechanisms you can consult the CPU manufacturers' documentation; each vendor's time-slicing algorithm differs, but they all achieve the same thing: the CPU keeps switching between threads, letting each thread run for one or more time slices (i.e. the slices that thread has won; if the thread hits an I/O operation while running, the CPU releases that thread's resources

[Repost] Python threads, the GIL, and ctypes

…衆ロ難τιáo~ submitted on 2019-12-05 20:38:25
Original: http://zhuoqiang.me/python-thread-gil-and-ctypes.html

The entanglement of the GIL and Python threads. What is the GIL? What effect does it have on our Python programs? Let's start with a question: when you run the following Python program, what is the CPU usage?

    # Don't imitate this at work -- dangerous :)
    def dead_loop():
        while True:
            pass

    dead_loop()

What's the answer: 100% CPU? Only on a single core, and an antique CPU without hyper-threading at that. On my dual-core CPU, this dead loop only eats the workload of one core, i.e. it occupies just 50% CPU. So how do we make it occupy 100% of the CPU on a dual-core machine? The answer is easy to guess: use two threads; isn't concurrently sharing CPU resources exactly what threads are for? Unfortunately, although the answer is right, doing it is not that simple. The program below starts another dead-loop thread in addition to the main thread:

    import threading

    def dead_loop():
        while True:
            pass

    # start a new dead-loop thread
    t = threading.Thread(target=dead_loop)
    t.start()

    # the main thread also enters a dead loop
    dead_loop()
    t.join()

In theory it should be able to occupy both cores' CPU resources, but in practice nothing changes
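The excerpt cuts off before the ctypes part of the original article, but the key point there is that ctypes releases the GIL around foreign function calls, so C-level work in several threads can genuinely overlap. A sketch assuming a POSIX system, where `CDLL(None)` exposes the process's own symbols, including libc's `usleep`:

```python
import ctypes
import threading
import time

libc = ctypes.CDLL(None)  # POSIX: gives access to libc symbols such as usleep

def c_sleep():
    # ctypes drops the GIL for the duration of the foreign call, so two
    # of these calls can sleep in C concurrently instead of back to back.
    libc.usleep(300_000)  # 0.3 s spent inside C with the GIL released

start = time.perf_counter()
threads = [threading.Thread(target=c_sleep) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"elapsed: {elapsed:.2f}s")  # close to 0.3 s, not 0.6 s
```

The same mechanism is why a CPU-bound C function called through ctypes from two threads can saturate two cores, which pure-Python dead loops cannot.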

Profiling the GIL

有些话、适合烂在心里 submitted on 2019-12-05 19:10:33
Is there a way to profile a Python process's usage of the GIL? Basically, I want to find out what percentage of the time the GIL is held. The process is single-threaded. My motivation is that I have some code written in Cython which uses nogil. Ideally, I would like to run it in a multi-threaded process, but in order to know whether that could potentially be a good idea, I need to know whether the GIL is free a significant amount of the time. I found this related question from 8 years ago. The sole answer there is "No". Hopefully, things have changed since then. Completely by accident, I found a tool

GIL

我怕爱的太早我们不能终老 submitted on 2019-12-05 06:19:26
The CPython interpreter itself is not thread-safe, hence the Global Interpreter Lock (GIL), which allows only one thread to execute Python bytecode at a time. As a result, a Python process usually cannot use multiple CPU cores at once. All blocking I/O functions in the Python standard library release the GIL to let other threads run, and time.sleep() releases it too. So despite the GIL, Python threads can still be useful in I/O-bound applications. For CPU-bound scenarios, you can try PyPy.

Source: https://www.cnblogs.com/liuer-mihou/p/11909835.html
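The claim that blocking calls release the GIL is easy to check with time.sleep; a minimal sketch (the thread count and sleep duration are illustrative):

```python
import threading
import time

def io_task():
    # time.sleep releases the GIL, so all four threads wait concurrently
    # instead of one after another.
    time.sleep(0.2)

start = time.perf_counter()
threads = [threading.Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s")  # about 0.2 s, not 0.8 s: the waits overlapped
```

Real socket and file I/O behaves the same way, which is why threads still pay off for I/O-bound workloads under the GIL.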

How to avoid gcc warning in Python C extension when using Py_BEGIN_ALLOW_THREADS

若如初见. submitted on 2019-12-05 05:30:15
The simplest way to manipulate the GIL in Python C extensions is to use the macros provided:

    my_awesome_C_function()
    {
        blah;
        Py_BEGIN_ALLOW_THREADS
        // do stuff that doesn't need the GIL
        if (should_i_call_back) {
            Py_BLOCK_THREADS
            // do stuff that needs the GIL
            Py_UNBLOCK_THREADS
        }
        Py_END_ALLOW_THREADS
        return blah blah;
    }

This works great, letting me release the GIL for the bulk of my code, but re-grabbing it for small bits of code that need it. The problem is that when I compile this with gcc, I get:

    ext/engine.c:548: warning: '_save' might be used uninitialized in this function

because Py_BEGIN

A thread-safe memoize decorator

末鹿安然 submitted on 2019-12-05 01:30:10
Question: I'm trying to make a memoize decorator that works with multiple threads. I understand that I need to use the cache as a shared object between the threads, and acquire/lock the shared object. I'm of course launching the threads:

    for i in range(5):
        thread = threading.Thread(target=self.worker, args=(self.call_queue,))
        thread.daemon = True
        thread.start()

where worker is:

    def worker(self, call):
        func, args, kwargs = call.get()
        self.returns.put(func(*args, **kwargs))
        call.task_done()

The problem
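A minimal thread-safe memoize decorator along the lines the question asks for: a dict shared by the threads, guarded by a lock (the names and the square example are illustrative, not from the question's code):

```python
import functools
import threading

def memoize(func):
    cache = {}
    lock = threading.Lock()

    @functools.wraps(func)
    def wrapper(*args):
        # Hold the lock for the whole lookup-compute-store sequence so
        # that two threads asking for the same key cannot both miss the
        # cache and both recompute.
        with lock:
            if args not in cache:
                cache[args] = func(*args)
            return cache[args]
    return wrapper

calls = []

@memoize
def square(x):
    calls.append(x)  # record real computations to show the cache works
    return x * x

threads = [threading.Thread(target=square, args=(7,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(square(7), len(calls))  # 49 1 -- five threads, one real computation
```

Note the trade-off: holding the lock while func runs serializes the actual computation. For expensive functions, a per-key lock (or checking the cache again after computing) lets independent keys be computed in parallel.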

Does using the subprocess module release the python GIL?

末鹿安然 submitted on 2019-12-05 01:19:15
When calling a Linux binary which takes a relatively long time through Python's subprocess module, does this release the GIL? I want to parallelise some code which calls a binary program from the command line. Is it better to use threads (through threading and a multiprocessing.pool.ThreadPool) or multiprocessing? My assumption is that if subprocess releases the GIL, then the threading option is the better choice.

"When calling a Linux binary which takes a relatively long time through Python's subprocess module, does this release the GIL?" Yes, it releases the Global Interpreter Lock (GIL) in the
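Since subprocess blocks in the child's wait with the GIL released, a ThreadPool is enough to run several binaries in parallel. A sketch, using sys.executable as a stand-in for the Linux binary (the 0.3 s child and pool size are illustrative):

```python
import subprocess
import sys
import time
from multiprocessing.pool import ThreadPool

def run_binary(_):
    # subprocess.run blocks waiting on the child with the GIL released,
    # so the other pool threads can launch and wait on their children
    # at the same time.
    subprocess.run([sys.executable, "-c", "import time; time.sleep(0.3)"],
                   check=True)

start = time.perf_counter()
with ThreadPool(4) as pool:
    pool.map(run_binary, range(4))
elapsed = time.perf_counter() - start
print(f"{elapsed:.2f}s")  # roughly 0.3 s plus startup, well under 4 * 0.3 s
```

multiprocessing would also work, but it buys nothing here: the heavy work already happens in a separate process (the binary), so the lighter-weight threads are the better fit.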

Releasing Python GIL while in C++ code

三世轮回 submitted on 2019-12-04 21:27:23
Question: I've got a library written in C++ which I wrap using SWIG and use in Python. Generally there is one class with a few methods. The problem is that calling these methods may be time-consuming; they may hang my application (the GIL is not released when calling these methods). So my question is: what is the simplest way to release the GIL for these method calls? (I understand that if I used a C library I could wrap it with some additional C code, but here I use C++ and classes.) Answer 1: The real problem is