Node.js on multi-core machines

猫巷女王i 2020-11-22 07:46

Node.js looks interesting, BUT I must be missing something - isn't Node.js tuned to run only on a single process and thread?

Then how does it scale for multi-core CPUs and multi-CPU servers?

15 Answers
  •  粉色の甜心
    2020-11-22 08:36

    [This post is up-to-date as of 2012-09-02 (newer than above).]

    Node.js absolutely does scale on multi-core machines.

    Yes, Node.js is one-thread-per-process. This is a very deliberate design decision and eliminates the need to deal with locking semantics. If you don't agree with this, you probably don't yet realize just how insanely hard it is to debug multi-threaded code. For a deeper explanation of the Node.js process model and why it works this way (and why it will NEVER support multiple threads), read my other post.

    So how do I take advantage of my 16 core box?

    Two ways:

    • For big heavy compute tasks like image encoding, Node.js can fire up child processes or send messages to additional worker processes. In this design, you'd have one thread managing the flow of events and N processes doing heavy compute tasks and chewing up the other 15 CPUs (see the sketch after this list).
    • For scaling throughput on a webservice, you should run multiple Node.js servers on one box, one per core and split request traffic between them. This provides excellent CPU-affinity and will scale throughput nearly linearly with core count.
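
    Here is a minimal sketch of the first approach using child_process.fork. The worker.js file name, the message shape, and the busy loop standing in for real compute are my own illustration, not something from the original post:

    // worker.js - heavy compute runs in its own process,
    // so it can't block the main event loop
    process.on('message', function(job) {
      var sum = 0;
      for (var i = 0; i < job.n; i++) sum += i; // stand-in for e.g. image encoding
      process.send({ sum: sum });
    });

    // main.js - the event-loop process delegates the heavy work
    var fork = require('child_process').fork;
    var worker = fork(__dirname + '/worker.js');
    worker.on('message', function(msg) {
      console.log('result from worker:', msg.sum);
      worker.kill();
    });
    worker.send({ n: 1e8 });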

    Scaling throughput on a webservice

    Since v0.6.x, Node.js has included the cluster module straight out of the box, which makes it easy to set up multiple node workers that can listen on a single port. Note that this is NOT the same as the older LearnBoost "cluster" module available through npm.

    var cluster = require('cluster');
    var http = require('http');
    var numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      // Fork one worker per CPU core.
      for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
      }
    } else {
      // Workers share port 8000; incoming connections are spread among them.
      http.Server(function(req, res) {
        res.writeHead(200);
        res.end('hello from ' + process.pid + '\n');
      }).listen(8000);
    }
    

    Workers will compete to accept new connections, and the least loaded process is most likely to win. This works pretty well in practice and scales throughput nicely on a multi-core box.

    If you have enough load to care about multiple cores, then you are going to want to do a few more things too:

    1. Run your Node.js service behind a web-proxy like Nginx or Apache - something that can do connection throttling (unless you want overload conditions to bring the box down completely), rewrite URLs, serve static content, and proxy other sub-services.

    2. Periodically recycle your worker processes. For a long-running process, even a small memory leak will eventually add up (a minimal recycling sketch follows this list).

    3. Set up log collection / monitoring.
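
    For point 2, here is a rough recycling sketch; the one-hour lifetime and the overall structure are assumptions for illustration:

    var cluster = require('cluster');
    var http = require('http');
    var numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      for (var i = 0; i < numCPUs; i++) cluster.fork();
      // Replace any worker that exits, whether it crashed or retired.
      cluster.on('exit', function(worker) {
        console.log('worker ' + worker.process.pid + ' exited; forking a replacement');
        cluster.fork();
      });
    } else {
      var server = http.Server(function(req, res) { res.end('ok\n'); }).listen(8000);
      // Retire voluntarily after an hour; close() stops accepting new
      // connections and lets in-flight requests finish before exit.
      setTimeout(function() {
        server.close(function() { process.exit(0); });
      }, 60 * 60 * 1000);
    }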


    PS: There's a discussion between Aaron and Christopher in the comments of another post (as of this writing, it's the top post). A few comments on that:

    • A shared socket model is very convenient for allowing multiple processes to listen on a single port and compete to accept new connections. Conceptually, you could think of preforked Apache doing this with the significant caveat that each process will only accept a single connection and then die. The efficiency loss for Apache is in the overhead of forking new processes and has nothing to do with the socket operations.
    • For Node.js, having N workers compete on a single socket is an extremely reasonable solution. The alternative is to set up an on-box front-end like Nginx and have that proxy traffic to the individual workers, alternating between workers for assigning new connections. The two solutions have very similar performance characteristics. And since, as I mentioned above, you will likely want to have Nginx (or an alternative) fronting your node service anyway, the choice here is really between:

    Shared Ports: nginx (port 80) --> Node_workers x N (sharing port 3000 w/ Cluster)

    vs

    Individual Ports: nginx (port 80) --> {Node_worker (port 3000), Node_worker (port 3001), Node_worker (port 3002), Node_worker (port 3003) ...}

    There are arguably some benefits to the individual-ports setup (less coupling between processes, more sophisticated load-balancing decisions, etc.), but it is definitely more work to set up, and the built-in cluster module is a low-complexity alternative that works for most people.
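
    For completeness, a sketch of the individual-ports layout; the WORKER_PORT variable name and base port 3000 are my own choices, picked to match the diagram above:

    var cluster = require('cluster');
    var http = require('http');
    var numCPUs = require('os').cpus().length;
    var BASE_PORT = 3000;

    if (cluster.isMaster) {
      for (var i = 0; i < numCPUs; i++) {
        cluster.fork({ WORKER_PORT: BASE_PORT + i }); // distinct port per worker
      }
    } else {
      var port = parseInt(process.env.WORKER_PORT, 10);
      // Nginx (or similar) would balance across ports 3000..3000+N-1.
      http.Server(function(req, res) {
        res.end('worker ' + process.pid + ' on port ' + port + '\n');
      }).listen(port);
    }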
