I've been reading about the asyncio module in Python 3, and more broadly about coroutines in Python, and I can't get what makes asyncio such a great tool. I have the feeling that…
Not a proper answer, but a list of hints that could not fit into a comment:
You are mentioning the multiprocessing module (and let's consider threading too). Suppose you have to handle hundreds of sockets: can you spawn hundreds of processes or threads?
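As a rough illustration (the host, port, and connection count below are arbitrary placeholders), a single thread can multiplex hundreds of sockets with one lightweight task per connection:

```python
import asyncio

async def probe(host: str, port: int) -> bool:
    # One coroutine per connection: a coroutine costs a few KB,
    # versus megabytes of stack for a default OS thread.
    try:
        reader, writer = await asyncio.open_connection(host, port)
        writer.close()
        await writer.wait_closed()
        return True
    except OSError:
        return False

async def main() -> None:
    # Hundreds of sockets multiplexed on a single thread and event loop;
    # spawning 500 OS threads or processes for this would cost far more.
    results = await asyncio.gather(*(probe("example.com", 80) for _ in range(500)))
    print(f"{sum(results)} of {len(results)} connections succeeded")

asyncio.run(main())
```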
Again, with threads and processes: how do you handle concurrent access to shared resources? What is the overhead of mechanisms like locking?
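For contrast, here is a sketch of why much of that locking disappears with coroutines: asyncio tasks run on one thread and only switch at explicit await points, so plain shared state often needs no lock at all (the counter below is just a toy example):

```python
import asyncio

counter = 0  # shared state, deliberately unprotected

async def worker(n: int) -> None:
    global counter
    for _ in range(n):
        # Tasks only switch at explicit await points, so this
        # read-modify-write cannot be interleaved by another task --
        # no lock needed, unlike the equivalent threaded code.
        counter += 1
        await asyncio.sleep(0)  # yield control to the event loop

async def main() -> None:
    await asyncio.gather(*(worker(1000) for _ in range(10)))
    print(counter)  # always 10000: no lost updates, no locking overhead

asyncio.run(main())
```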
Frameworks like Celery also add significant overhead. Can you use it e.g. for handling every single request on a high-traffic web server? By the way, in that scenario, who is responsible for handling sockets and connections (Celery by its nature can't do that for you)?
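asyncio, on the other hand, is exactly the layer that owns sockets and connections. A minimal sketch of a server where the event loop itself accepts every connection (the address and port are arbitrary):

```python
import asyncio

async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
    # The event loop owns the listening socket and each connection;
    # every request is served by a cheap task -- no broker, no worker pool.
    data = await reader.readline()
    writer.write(b"echo: " + data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main() -> None:
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```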
Be sure to read the rationale behind asyncio. That rationale (among other things) mentions a system call: writev() -- isn't that much more efficient than multiple write()s?
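On that point, asyncio streams expose writelines(), which hands a whole list of buffers to the transport at once; depending on the transport and platform, that may be flushed with a single writev()-style system call rather than one write() per chunk. A rough sketch (the target host and request are illustrative):

```python
import asyncio

async def main() -> None:
    reader, writer = await asyncio.open_connection("example.com", 80)
    request = [
        b"GET / HTTP/1.1\r\n",
        b"Host: example.com\r\n",
        b"Connection: close\r\n",
        b"\r\n",
    ]
    # writelines() passes all chunks to the transport in one go; the
    # transport may then flush them with a single writev()-style call
    # instead of one write() per chunk (an implementation detail).
    writer.writelines(request)
    await writer.drain()
    status = await reader.readline()
    print(status.decode().strip())
    writer.close()
    await writer.wait_closed()

asyncio.run(main())
```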