Can the thread per request model be faster than non-blocking I/O?

你的背包 2021-02-09 00:34

I remember, two or three years ago, reading a couple of articles in which people claimed that modern threading libraries were getting so good that thread-per-request servers would not only be easier to write than non-blocking servers, but also just as fast, if not faster. Is that actually the case?

4 Answers
  •  不思量自难忘°
    2021-02-09 00:58

    Not in my opinion. If both models are well implemented (this is a BIG requirement), I think the concept of NIO should prevail.

    At the heart of a computer are cores. No matter what you do, you cannot parallelize your application across more cores than you have. For example, on a 4-core machine you can only do 4 things at a time (I'm glossing over some details here, but that suffices for this argument).

    Expanding on that thought, if you ever have more threads than cores, you have waste. That waste takes two forms. First is the overhead of the extra threads themselves. Second is the time spent switching between threads. Both are probably minor, but they are there.
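
    As a rough illustration of that point (a sketch of my own, not part of the original answer; the class and task names are made up), sizing a worker pool to Runtime.getRuntime().availableProcessors() keeps CPU-bound work from ever having more runnable threads than the machine can execute; submitting extra tasks just queues them:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class CoreSizedPool {
            public static void main(String[] args) throws InterruptedException {
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores); // one thread per core

                // More tasks than cores: the surplus waits in the queue rather than
                // adding threads (and thread-switching overhead).
                for (int i = 0; i < cores * 4; i++) {
                    final int task = i;
                    pool.submit(() -> {
                        long sum = 0;
                        for (long j = 0; j < 50_000_000L; j++) sum += j; // CPU-bound busy work
                        System.out.println("task " + task + " done, sum=" + sum);
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.MINUTES);
            }
        }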

    Ideally, you have a single thread per core, and each of those threads runs at 100% processing speed on its core. In that ideal case, task switching wouldn't occur at all. Of course there is the OS, but if you take a 16-core machine and leave 2-3 threads for the OS, the remaining 13-14 go toward your app. Those threads can still switch what they are doing within your app, for example when they are blocked on I/O, but they don't have to pay that cost at the OS level: the switching is written right into your app.
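
    To make that "switch inside your app" idea concrete, here is a minimal single-threaded echo server on plain Java NIO (a sketch of my own; the port and the echo behaviour are arbitrary). One thread multiplexes every connection through a Selector, so a connection waiting on I/O never costs an extra OS-level thread or context switch:

        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        public class SingleThreadedEchoServer {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.bind(new InetSocketAddress(9090));
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buffer = ByteBuffer.allocate(4096);
                while (true) {
                    selector.select();                        // wait until some channel is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {             // new connection: register it, don't block on it
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {        // data arrived: read it and echo it back
                            SocketChannel client = (SocketChannel) key.channel();
                            buffer.clear();
                            if (client.read(buffer) == -1) {  // peer closed the connection
                                client.close();
                                continue;
                            }
                            buffer.flip();
                            client.write(buffer);
                        }
                    }
                }
            }
        }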

    An excellent example of this kind of scaling is SEDA (http://www.eecs.harvard.edu/~mdw/proj/seda/), which showed much better scaling under load than a standard thread-per-request model.
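
    In rough outline (this sketch is mine, not code from the SEDA project), the staged idea connects processing stages with bounded queues, each stage driven by its own thread or small pool, so overload shows up as backpressure on a queue instead of as thousands of blocked request threads:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class TwoStagePipeline {
            public static void main(String[] args) {
                // Bounded hand-off between stages: when it fills, the producing stage blocks.
                BlockingQueue<String> parsed = new ArrayBlockingQueue<>(1024);

                // Stage 1: "parse" incoming requests and hand them to the next stage.
                Thread parser = new Thread(() -> {
                    try {
                        for (int i = 0; i < 10; i++) {
                            parsed.put("request-" + i);   // blocks if stage 2 falls behind
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

                // Stage 2: "handle" parsed requests at its own pace.
                Thread handler = new Thread(() -> {
                    try {
                        for (int i = 0; i < 10; i++) {
                            System.out.println("handled " + parsed.take());
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });

                parser.start();
                handler.start();
            }
        }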

    My personal experience is with Netty. I had a simple app and implemented it well in both Tomcat and Netty, then load tested it with hundreds of concurrent requests (upwards of 800, I believe). Eventually Tomcat slowed to a crawl and exhibited extremely bursty, laggy behavior, whereas the Netty implementation simply saw response times increase while maintaining incredible overall throughput.
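
    For reference, a bare-bones Netty 4 echo server looks roughly like this (a sketch of my own, not the app from that test; the port is arbitrary); a small, fixed set of event-loop threads serves all connections, which is what keeps throughput steady while only the response time grows under load:

        import io.netty.bootstrap.ServerBootstrap;
        import io.netty.channel.*;
        import io.netty.channel.nio.NioEventLoopGroup;
        import io.netty.channel.socket.SocketChannel;
        import io.netty.channel.socket.nio.NioServerSocketChannel;

        public class NettyEchoServer {
            public static void main(String[] args) throws InterruptedException {
                EventLoopGroup boss = new NioEventLoopGroup(1);    // accepts connections
                EventLoopGroup workers = new NioEventLoopGroup();  // handles I/O; defaults to ~2x cores
                try {
                    ServerBootstrap bootstrap = new ServerBootstrap()
                        .group(boss, workers)
                        .channel(NioServerSocketChannel.class)
                        .childHandler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            protected void initChannel(SocketChannel ch) {
                                ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                    @Override
                                    public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                        ctx.writeAndFlush(msg); // echo the bytes back without blocking
                                    }
                                });
                            }
                        });
                    ChannelFuture future = bootstrap.bind(9090).sync();
                    future.channel().closeFuture().sync();
                } finally {
                    boss.shutdownGracefully();
                    workers.shutdownGracefully();
                }
            }
        }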

    Please note, this hinges on a solid implementation. NIO is still getting better with time. We are learning how to tune our server OSes to work better with it, as well as how to implement JVMs to better leverage that OS functionality. I don't think a winner can be declared yet, but I believe NIO will be the eventual winner, and it's doing quite well already.
