GIL

1023 class summary

Submitted by 人走茶凉 on 2019-12-02 02:56:57
Contents: the GIL (global interpreter lock) - Python interpreters - what multithreading is good for - deadlock - recursive locks - semaphores - thread queues (FIFO, LIFO, priority queue)

Python interpreters
CPython (written in C), Jython (written in Java), PyPy (written in Python).

The GIL (global interpreter lock)
We study the GIL in terms of CPython, because CPython's memory management is not thread-safe.
The GIL is essentially a mutex.
The GIL prevents multiple threads within the same process from executing bytecode at the same time (in parallel).
The GIL exists to keep the interpreter's internals thread-safe.
Note: when several threads are running, a thread releases the GIL as soon as it hits an IO operation and hands it to the next waiting thread.

What multithreading is good for
Compute-intensive programs (assume one task takes 10s):
  On a single core:
    processes are expensive to start; 4 processes take about 40s
    threads are cheap to start; 4 threads take about 40s
  On multiple cores:
    processes run in parallel, so 4 processes take about 10s
    threads still run one at a time under the GIL, so 4 threads take about 40s
IO-intensive programs (assume one task spends 10s waiting on IO):
  On a single core:
    4 processes take about 10s plus the heavy cost of starting them
    4 threads take about 10s with little overhead, because each thread releases the GIL while it waits
  On multiple cores:
    4 processes still take about 10s plus the process start-up overhead; extra cores help little when the time is spent waiting on IO
    4 threads take about 10s with far less overhead, so threads are the better choice for IO-bound work
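A minimal sketch of the compute-intensive comparison above, assuming a toy cpu_task in place of the 10-second task; absolute numbers depend on the machine, but on a multi-core box the process version should finish well ahead of the thread version:

# Toy comparison of 4 threads vs 4 processes on a CPU-bound task.
# Numbers depend on the machine; the shape of the result is what matters.
import time
from threading import Thread
from multiprocessing import Process

def cpu_task():
    # Pure-Python arithmetic: the GIL is held the whole time.
    total = 0
    for i in range(10_000_000):
        total += i

def run(worker_cls, n=4):
    start = time.time()
    workers = [worker_cls(target=cpu_task) for _ in range(n)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return round(time.time() - start, 2)

if __name__ == '__main__':
    print('4 threads  :', run(Thread), 's')   # roughly serial under the GIL
    print('4 processes:', run(Process), 's')  # can use multiple cores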

Python Global Interpreter Lock (GIL) problem

Submitted by 别来无恙 on 2019-12-01 23:30:21
I want to provide a web service where people can test the performance of an algorithm, which is written in Python and runs on a Linux machine. Basically, what I want to do is this: there is a very trivial PHP handler, say start_algo.php, which accepts the request coming from the browser, and the PHP code uses system() or popen() (something like exec( "python algo.py" )) to launch a new process running the Python script. I think that part is doable. The problem is that, since it is a web service, it obviously has to serve multiple users at the same time, but I am quite confused by
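For reference, a hedged Python-side sketch of the same process-per-request idea (the actual handler in the question is PHP and uses system()/popen(); algo.py comes from the question, while handle_request and input_path are illustrative names). Each spawned process is a separate interpreter with its own GIL, so simultaneous requests do not contend on a single lock:

# Python-side sketch of "one process per request"; the real handler in the
# question is PHP. algo.py comes from the question, the rest is illustrative.
import subprocess

def handle_request(input_path):
    # Every call starts a fresh interpreter with its own GIL, so concurrent
    # requests run in separate processes and do not share one lock.
    proc = subprocess.run(
        ['python', 'algo.py', input_path],
        capture_output=True, text=True, timeout=60,
    )
    return proc.stdout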

Python GIL: is django save() blocking?

Submitted by 淺唱寂寞╮ on 2019-12-01 18:30:03
My django app saves django models to a remote database. Sometimes the saves come in bursts. In order to free the main thread (*thread_A*) of the application from the time cost of saving multiple objects to the database, I thought of handing the model objects to a separate thread (*thread_B*) using collections.deque and having *thread_B* save them sequentially. Yet I'm unsure about this scheme. save() returns the id of the new database entry, so it "ends" only after the database responds, which is at the end of the transaction. Does django.db.models.Model.save() really block GIL-wise and
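A minimal sketch of the scheme described above, assuming a hypothetical MyModel; queue.Queue is used in place of a bare collections.deque so the worker can block instead of polling, which is a swapped-in choice rather than the asker's design:

# Sketch of the hand-off scheme: thread_A enqueues instances, thread_B saves
# them one by one. queue.Queue replaces a bare collections.deque so the worker
# can block on get() instead of polling. MyModel is hypothetical.
import queue
import threading

save_queue = queue.Queue()

def saver():                        # body of thread_B
    while True:
        obj = save_queue.get()
        if obj is None:             # sentinel: shut the worker down
            break
        obj.save()                  # waits on the database; most drivers
                                    # release the GIL during that wait
        save_queue.task_done()

threading.Thread(target=saver, daemon=True).start()

# In thread_A, hand the object off instead of saving inline:
#     save_queue.put(MyModel(field='value'))
# and save_queue.put(None) at shutdown.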

Python and truly concurrent threads

Submitted by 主宰稳场 on 2019-12-01 17:54:27
Question: I've been reading for hours now and I can't quite figure out how Python multithreading can be faster than a single thread. The question really stems from the GIL. If there is a GIL, and only one thread is really running at any single time, how can multithreading be faster than a single thread? I read that with some operations the GIL is released (like writing to a file). Is that what makes multithreading faster? And about greenlets: how do those help with concurrency at all? So far all the purpose I
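A small sketch of the point about the GIL being released during IO, using time.sleep as a stand-in for a real blocking read; the four waits overlap, so the total comes out close to one second rather than four:

# The GIL is dropped during blocking calls, so IO-bound threads overlap.
# time.sleep stands in for a real network or disk wait.
import time
from threading import Thread

def io_task():
    time.sleep(1)                   # GIL released while waiting

start = time.time()
threads = [Thread(target=io_task) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print('4 IO-bound tasks took', round(time.time() - start, 2), 's')   # ~1s, not ~4s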

PyQt, QThread, GIL, GUI

Submitted by 五迷三道 on 2019-12-01 08:51:14
I have a GUI and program logic written in Python. I request information from the web by calling urllib.requests (and so on) very often, and this causes a problem: the GUI is unresponsive even though these calls are wrapped with QThread. I think that happens because of the GIL. But then how can I use QThread in a PyQt application, and what use is it in PyQt if I can't make code work asynchronously? --The code-- qtthreaddecorator.py:

from PyQt4 import QtCore

class Worker(QtCore.QThread):
    def __init__(self, thread_name, finished_slot, function, *args, **kwargs):
        QtCore.QThread.__init__(self)
        self._thread_name =
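The excerpt cuts off inside the constructor; a hedged completion is sketched below. Everything after the assignment to self._thread_name is an assumption about how such a wrapper usually continues, not the asker's actual code. The point it illustrates is that blocking urllib calls made inside run() happen in the worker thread and release the GIL while waiting on the network, so the GUI thread can stay responsive.

# Hypothetical completion of the truncated Worker class; only the constructor
# signature comes from the question, the rest is an illustrative guess.
from PyQt4 import QtCore

class Worker(QtCore.QThread):
    def __init__(self, thread_name, finished_slot, function, *args, **kwargs):
        QtCore.QThread.__init__(self)
        self._thread_name = thread_name
        self._function = function
        self._args = args
        self._kwargs = kwargs
        self.result = None
        # QThread.finished is emitted after run() returns, notifying the GUI side.
        self.finished.connect(finished_slot)

    def run(self):
        # Runs in the worker thread. A blocking urllib call here releases the
        # GIL while it waits on the socket, so the GUI thread keeps running.
        self.result = self._function(*self._args, **self._kwargs)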

[python] -- the GIL, thread locks (mutex), and recursive locks (RLock)

Submitted by 二次信任 on 2019-12-01 08:31:37
The GIL
A computer with 4 cores can really do 4 tasks at the same time. With a single-core CPU, if I start 10 threads it still looks concurrent, because context switching makes it appear that way; but a single core always executes serially: the CPU runs thread 1 for a moment, then thread 2, and so on. That is how normal threads behave. In Python, however, no matter how many cores you have - 4, 8, 16, ... - only one thread executes at any given moment. This is a design flaw from Python's early development, which is why Python threads are often called "fake" threads.

1. Global Interpreter Lock (GIL)
No matter how many threads you start or how many CPUs you have, Python calmly allows only one thread to run at any given moment.

2. Why does the GIL exist?
Python threads are the operating system's native threads, which are C-level threads, because Python (CPython) is written in C and calls C interfaces at startup. Since the interpreter dispatches work to these native C threads, it has to pass the execution context to them, and to keep operations such as additions and subtractions consistent it serializes execution: first thread 1, then thread 2, and so on. While each thread is executing, the Python interpreter cannot control it, because the thread is running inside the C interface, beyond Python's reach
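A minimal sketch tying the locks in the title together: the GIL does not make a Python-level read-modify-write such as count += 1 atomic, so an explicit Lock is still needed, and an RLock additionally lets the same thread re-acquire a lock it already holds. The counter and iteration counts are illustrative:

# The GIL serializes bytecodes, but `count += 1` is several bytecodes, so a
# user-level Lock is still needed for a consistent result. Counts are toy values.
import threading

count = 0
lock = threading.Lock()

def add(n):
    global count
    for _ in range(n):
        with lock:                 # serialize the read-modify-write
            count += 1

threads = [threading.Thread(target=add, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)                       # 400000 with the lock; may fall short without it

# An RLock can be re-acquired by the thread that already holds it;
# nesting a plain Lock like this would deadlock.
rlock = threading.RLock()
with rlock:
    with rlock:
        pass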

2-6 The GIL (global interpreter lock)

Submitted by 江枫思渺然 on 2019-12-01 06:46:48
I. Introduction

Definition: In CPython, the global interpreter lock, or GIL, is a mutex that prevents multiple native threads from executing Python bytecodes at once. This lock is necessary mainly because CPython's memory management is not thread-safe. (However, since the GIL exists, other features have grown to depend on the guarantees that it enforces.)

Conclusion: under the CPython interpreter, of the threads started within a single process, only one can execute at any given moment, so the process cannot take advantage of multiple cores.

The first thing to be clear about is that the GIL is not a feature of the Python language; it is a concept introduced by one implementation of the Python interpreter (CPython). It is like C++ being a language (syntax) standard that different compilers - well-known ones include GCC, Intel C++, and Visual C++ - can turn into executable code. Likewise, the same piece of Python code can be run by different Python runtimes such as CPython, PyPy, or Psyco. Jython, for one, has no GIL.
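A small sketch of the point that the GIL belongs to the interpreter rather than the language: platform reports which implementation is running, and on CPython sys.getswitchinterval() shows how often the interpreter offers the GIL to other threads (sys.setswitchinterval() can change it):

# The GIL is an implementation detail of CPython, not of the Python language.
import platform
import sys

print(platform.python_implementation())    # 'CPython', 'PyPy', 'Jython', ...
if platform.python_implementation() == 'CPython':
    # How often CPython asks the running thread to hand over the GIL.
    print(sys.getswitchinterval())          # 0.005 seconds by default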

Where can I find a list of numpy functions which release the GIL?

Submitted by 喜你入骨 on 2019-12-01 03:52:33
I have found several SO questions asking about this in one way or another, but none of them actually gives a list or refers to one. This question refers to a wiki page, but while the wiki page talks about the GIL and multi-threading, it doesn't give a list of GIL-releasing functions. This mailing list post indicates that the only way to find out is to read the numpy source. Really?

It's not guaranteed to catch everything, but I just ran: git grep nogil in my clone of the numpy repository. It turns up 82 usages in 2 files: random/mtrand/mtrand.pyx and random/mtrand/numpy.pxd
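Besides grepping the source, a rough empirical check is possible: time a candidate call once from a single thread and again from several threads. If four threads take about as long as one, the call very likely drops the GIL; if they take roughly four times longer, it holds it. The array size and the choice of np.sort below are illustrative:

# Rough empirical check: compare one thread against four threads doing the
# same numpy call. Near-equal wall time suggests the call releases the GIL;
# a ~4x slowdown suggests it holds it. Sizes here are illustrative.
import time
from threading import Thread

import numpy as np

a = np.random.rand(2000, 2000)

def work():
    np.sort(a, axis=None)          # candidate function to test

def timed(n_threads):
    start = time.time()
    threads = [Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start

print('1 thread :', round(timed(1), 2), 's')
print('4 threads:', round(timed(4), 2), 's')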