Node.js slower than Apache

误落风尘 2020-12-13 10:25

I am comparing the performance of Node.js (0.5.1-pre) vs Apache (2.2.17) for a very simple scenario - serving a text file.

Here's the code I use for the node server:

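The original snippet is missing here; purely as a sketch, going by what the answers below say about it (the file is read into memory once at startup and the same buffer is sent back for every request), the server would have looked roughly like this:

    var http = require('http');
    var fs = require('fs');

    // File name and port are placeholders; the point is that the file is read
    // only once and the resulting buffer is reused for every request.
    var body = fs.readFileSync('/var/www/test.txt');

    http.createServer(function (req, res) {
        res.writeHead(200, {
            'Content-Type': 'text/plain',
            'Content-Length': body.length
        });
        res.end(body);
    }).listen(8124, '127.0.0.1');
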
6 Answers
  • 2020-12-13 10:38

    Dynamic requests

    node.js is very good at handling a lot of small dynamic requests (which can be hanging/long-polling), but it is not good at handling large buffers. Ryan Dahl (the author of node.js) explained this in one of his presentations. I recommend you study these slides. I also watched this online somewhere.

    Garbage Collector

    As you can see from slide 13 of 45, it is bad at big buffers.

    Slide 15 of 45:

    V8 has a generational garbage collector. Moves objects around randomly. Node can’t get a pointer to raw string data to write to socket.

    Use Buffer

    Slide 16 of 45:

    Using Node’s new Buffer object, the results change.

    Still not as good as, for example, nginx, but a lot better. Also, these slides are pretty old, so Ryan has probably improved things since then.
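
    To make the Buffer point concrete, here is a minimal sketch (mine, not from the slides): the commented-out line answers every request with a JavaScript string, which V8 has to encode to bytes each time, while the active line reuses a buffer that was encoded once up front.

    var http = require('http');

    // The same payload twice: once as a string, once pre-encoded as a Buffer.
    var textBody = new Array(1024).join('x') + '\n';
    var bufferBody = Buffer.from(textBody); // new Buffer(textBody) on the 0.x versions discussed here

    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        // res.end(textBody);   // string: re-encoded on every request
        res.end(bufferBody);    // Buffer: raw bytes, no per-request encoding
    }).listen(8200, '127.0.0.1');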

    CDN

    Still, I don't think you should be using node.js to host static files. You are probably better off hosting them on a CDN, which is optimized for serving static files. Some popular CDNs (some of them even free) are listed on the WIKI.

    Nginx (+ memcached)

    If you don't want to use a CDN to host your static files, I recommend using Nginx with memcached instead, which is very fast.
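
    A minimal nginx sketch of that setup (the key scheme, fallback root, and memcached address are only illustrative, and something else still has to load the files into memcached):

    # Try memcached first, fall back to disk on a miss.
    location / {
        set $memcached_key "$uri";
        memcached_pass 127.0.0.1:11211;
        default_type text/html;
        error_page 404 502 504 = @disk;
    }

    location @disk {
        root /var/www;
    }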

  • 2020-12-13 10:39

    Really, all you're doing here is getting the system to copy data between buffers in memory, in different processes' address spaces - the disk cache means you aren't really touching the disk, and you're using local sockets.

    So the fewer copies that have to be done per request, the faster it goes.

    Edit: I suggested adding caching, but in fact I see now you're already doing that - you read the file once, then start the server and send back the same buffer each time.

    Have you tried appending the header part to the file data once upfront, so you only have to do a single write operation for each request?
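
    A sketch of that single-write idea (file name and port are made up, and Buffer.concat/Buffer.from postdate the node version in the question): glue the status line and headers onto the file contents once at startup, then hand the socket the whole response in one call. It ignores the incoming request and keep-alive entirely, so it is only a skeleton.

    var net = require('net');
    var fs = require('fs');

    // Read the file once and prepend the HTTP header to it up front.
    var body = fs.readFileSync('/var/www/test.txt');
    var head = 'HTTP/1.1 200 OK\r\n' +
               'Content-Type: text/plain\r\n' +
               'Content-Length: ' + body.length + '\r\n' +
               'Connection: close\r\n' +
               '\r\n';
    var full = Buffer.concat([Buffer.from(head), body]);

    net.createServer(function (socket) {
        // One write per connection instead of separate header and body writes.
        socket.end(full);
    }).listen(8300, '127.0.0.1');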

  • 2020-12-13 10:44

    In the benchmarks below,

    Apache:

    $ apache2 -version
    Server version: Apache/2.2.17 (Ubuntu)
    Server built:   Feb 22 2011 18:35:08
    

    PHP APC cache/accelerator is installed.

    The tests were run on my laptop, a Sager NP9280 with a Core i7 920 and 12 GB of RAM.

    $ uname -a
    Linux presto 2.6.38-8-generic #42-Ubuntu SMP Mon Apr 11 03:31:24 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
    

    Kubuntu Natty

  • 2020-12-13 10:48

    The result of your benchmark can change in favor of node.js if you increase the concurrency and use caching in node.js.

    Sample code from the book "Node Cookbook":

    var http = require('http');
    var path = require('path');
    var fs = require('fs');

    var mimeTypes = {
        '.js'  : 'text/javascript',
        '.html': 'text/html',
        '.css' : 'text/css'
    };

    // In-memory cache: file path -> { content: <Buffer> }
    var cache = {};

    // Deliver from the cache when possible; otherwise read the file from disk
    // and cache it for subsequent requests.
    function cacheAndDeliver(f, cb) {
        if (!cache[f]) {
            fs.readFile(f, function (err, data) {
                if (!err) {
                    cache[f] = {content: data};
                }
                cb(err, data);
            });
            return;
        }
        console.log('loading ' + f + ' from cache');
        cb(null, cache[f].content);
    }

    http.createServer(function (request, response) {
        var lookup = path.basename(decodeURI(request.url)) || 'index.html';
        var f = 'content/' + lookup;
        fs.exists(f, function (exists) {
            if (!exists) {
                response.writeHead(404); // no such file found!
                response.end('Page Not Found!');
                return;
            }
            cacheAndDeliver(f, function (err, data) {
                if (err) {
                    response.writeHead(500);
                    response.end('Server Error!');
                    return;
                }
                var headers = {'Content-Type': mimeTypes[path.extname(lookup)]};
                response.writeHead(200, headers);
                response.end(data);
            });
        });
    }).listen(8080); // the original snippet stopped before listen(); 8080 is assumed
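
    Assuming the server above is saved as server.js and listens on 8080 as sketched, it can be exercised with the same ab invocation used elsewhere in this thread:

    $ mkdir -p content
    $ echo 'hello, world' > content/index.html
    $ node server.js &
    $ ab -r -n 100000 -k -c 50 http://localhost:8080/index.html
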
  • 2020-12-13 10:55

    In this scenario Apache is probably doing sendfile, which results in the kernel sending a chunk of memory data (cached by the fs driver) directly to the socket. In the case of node, there is some overhead in copying data in userspace between V8, libeio and the kernel (see this great article on using sendfile in node).
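
    Node's http module does not expose sendfile(2) on its own; a common way to at least reduce userspace buffering is to stream the file to the response rather than reading it whole. A rough sketch (file name made up):

    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        // Data flows through in fixed-size chunks; still not true zero-copy
        // sendfile, but it avoids buffering the whole file in userspace.
        fs.createReadStream('/var/www/test.txt').pipe(res);
    }).listen(8400, '127.0.0.1');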

    There are plenty of possible scenarios where node will outperform Apache, like 'send a stream of data at a constant slow speed to as many TCP connections as possible'.

  • 2020-12-13 11:00
    $ cat /var/www/test.php
    <?php
    for ($i=0; $i<10; $i++) {
            echo "hello, world\n";
    }
    
    
    $ ab -r -n 100000 -k -c 50 http://localhost/test.php
    This is ApacheBench, Version 2.3 <$Revision: 655654 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking localhost (be patient)
    Completed 10000 requests
    Completed 20000 requests
    Completed 30000 requests
    Completed 40000 requests
    Completed 50000 requests
    Completed 60000 requests
    Completed 70000 requests
    Completed 80000 requests
    Completed 90000 requests
    Completed 100000 requests
    Finished 100000 requests
    
    
    Server Software:        Apache/2.2.17
    Server Hostname:        localhost
    Server Port:            80
    
    Document Path:          /test.php
    Document Length:        130 bytes
    
    Concurrency Level:      50
    Time taken for tests:   3.656 seconds
    Complete requests:      100000
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    100000
    Total transferred:      37100000 bytes
    HTML transferred:       13000000 bytes
    Requests per second:    27350.70 [#/sec] (mean)
    Time per request:       1.828 [ms] (mean)
    Time per request:       0.037 [ms] (mean, across all concurrent requests)
    Transfer rate:          9909.29 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.1      0       3
    Processing:     0    2   2.7      0      29
    Waiting:        0    2   2.7      0      29
    Total:          0    2   2.7      0      29
    
    Percentage of the requests served within a certain time (ms)
      50%      0
      66%      2
      75%      3
      80%      3
      90%      5
      95%      7
      98%     10
      99%     12
     100%     29 (longest request)
    
    $ cat node-test.js 
    var http = require('http');
    http.createServer(function (req, res) {
              res.writeHead(200, {'Content-Type': 'text/plain'});
                res.end('Hello World\n');
    }).listen(1337, "127.0.0.1");
    console.log('Server running at http://127.0.0.1:1337/');
    
    $ ab -r -n 100000 -k -c 50 http://localhost:1337/
    This is ApacheBench, Version 2.3 <$Revision: 655654 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/
    
    Benchmarking localhost (be patient)
    Completed 10000 requests
    Completed 20000 requests
    Completed 30000 requests
    Completed 40000 requests
    Completed 50000 requests
    Completed 60000 requests
    Completed 70000 requests
    Completed 80000 requests
    Completed 90000 requests
    Completed 100000 requests
    Finished 100000 requests
    
    
    Server Software:        
    Server Hostname:        localhost
    Server Port:            1337
    
    Document Path:          /
    Document Length:        12 bytes
    
    Concurrency Level:      50
    Time taken for tests:   14.708 seconds
    Complete requests:      100000
    Failed requests:        0
    Write errors:           0
    Keep-Alive requests:    0
    Total transferred:      7600000 bytes
    HTML transferred:       1200000 bytes
    Requests per second:    6799.08 [#/sec] (mean)
    Time per request:       7.354 [ms] (mean)
    Time per request:       0.147 [ms] (mean, across all concurrent requests)
    Transfer rate:          504.62 [Kbytes/sec] received
    
    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0    0   0.1      0       3
    Processing:     0    7   3.8      7      28
    Waiting:        0    7   3.8      7      28
    Total:          1    7   3.8      7      28
    
    Percentage of the requests served within a certain time (ms)
      50%      7
      66%      9
      75%     10
      80%     11
      90%     12
      95%     14
      98%     16
      99%     17
     100%     28 (longest request)
    
    $ node --version
    v0.4.8
    