How frequently can I send data using Socket.IO?

甜味超标 · asked 2021-02-04 11:52

I'm creating a web application that would require small amounts of data (3 integer values per socket) to be sent from the server to the client very frequently and I wanted to s

2 Answers
  • 2021-02-04 12:10

    How frequently can I send data using Socket.IO?

    Is there a limit to how often I can send new data using Socket.IO?

    There is no coded limit. It will only depend upon your ability to process the messages at both ends and the bandwidth to deliver them. If you really wanted to know your own limit with the hardware, network and OS you're running, you'd have to devise a test: send rapid-fire packets of a representative size and see how many you can send per second such that they all reach the destination and no errors are seen on either end.

    With the ideal number being 200 socket connections sending 50 updates per second.

    Your server would need to be able to send 10,000 messages per second and each client would need to be able to process 50 incoming messages per second. That's all theoretically possible with the right hardware and right network connection.

    But, 50 updates per second sounds like it's probably both unnecessary and inefficient. No end-user is going to perceive a change every 20 ms, which is what 50 updates per second works out to. So it would be a LOT more efficient to batch your updates to each client into maybe 10 updates per second.

    I calculated that if each data packet being sent is approximately 500 bytes, then I would be able to send 20 updates a second to 100 connections on a 1 MB/s connection.

    That type of calculation only works for sending large blocks of data. With lots of small messages there are many inefficiencies: the TCP packet overhead and the WebSocket/socket.io framing overhead become a measurable percentage of the overall bandwidth consumption, and because TCP is a reliable protocol, ACKs also flow back and forth to acknowledge delivery. If the packets are small, you probably don't have an overall bandwidth problem; the issue will be more the processing of lots of small packets and the overhead of doing that.
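    To illustrate how per-message overhead eats into that estimate, here is some rough arithmetic. The 60-byte overhead figure is an assumption for TCP/IP headers plus framing when each message gets its own packet, not a measured socket.io number:

```javascript
// Illustrative only: overheadBytes is an assumed per-message cost
// (TCP/IP headers plus WebSocket/socket.io framing); real overhead
// varies with packet coalescing and message size.
function maxMessagesPerSec(bandwidthBytesPerSec, payloadBytes, overheadBytes) {
  return Math.floor(bandwidthBytesPerSec / (payloadBytes + overheadBytes))
}

// 1 MB/s link, 500-byte payloads:
console.log(maxMessagesPerSec(1000000, 500, 0))  // ideal: 2000 msgs/s
console.log(maxMessagesPerSec(1000000, 500, 60)) // with assumed overhead: 1785 msgs/s
```

    The gap widens as payloads shrink: for a tiny 20-byte payload, the same 60 bytes of overhead is three times the size of the data itself.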

    If you can combine updates into a smaller number of updates per second, you will get a lot better scalability.
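    One way to do that combining (a minimal sketch; the `makeBatcher` helper and the 100 ms flush interval are my own illustration, not part of socket.io):

```javascript
// Sketch: queue frequent small updates and emit them as one message per flush.
// `emit` stands in for socket.emit (an assumption for illustration).
function makeBatcher(emit) {
  let queue = []
  return {
    push(update) { queue.push(update) },  // called on every data change
    flush() {                             // called on a timer, e.g. 10x/sec
      if (queue.length === 0) return
      emit('batch', queue)                // one emit carries all queued updates
      queue = []
    },
  }
}

// Usage on the server: flush 10 times per second instead of emitting 50 times.
// const batcher = makeBatcher((ev, data) => socket.emit(ev, data))
// setInterval(() => batcher.flush(), 100)
```

    Each client then receives one message per flush containing an array of updates, instead of many tiny messages, which cuts the per-message framing and ACK traffic described above.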

  • 2021-02-04 12:32

    That's a very system-, network- and code-dependent question.

    Here is a small test harness I've used for similar plain socket.io testing before. I've plugged in some bits to fit your question.

    Server

    const io = require('socket.io')(8082)
    const connections = []
    
    io.on('connection', function(socket){
    
      connections.push(socket);
      const slog = (msg, ...args) => console.log('%s %s '+msg, Date.now(), socket.id, ...args)
      slog('Client connected. Total: %s', connections.length)
    
      socket.on('disconnect', function(data){
        connections.splice(connections.indexOf(socket), 1);
        slog('Client disconnected. Total: %s', connections.length)
      })
    
      socket.on('single', function(data){
        const now = Date.now() // `now` was previously undefined
        socket.emit('single', [ 0, now, now, now ])
      })

      socket.on('start', function(data = {}){
        slog('Start stream', data)
        sendBatch(1, data.count, data.delay)
      })

      socket.on('start dump', function(data = {}){
        slog('Start dump', data)
        sendBatch(1, data.count)
      })

      function sendBatch(i, max, delay){
        if ( i > max ) return slog('Done batch %s %s', max, delay)
        const now = Date.now()
        socket.emit('batch', [ i, now, now, now ])
        // `i + 1`, not `i++`: post-increment passed the old value, so the
        // recursion never advanced
        if (delay) {
          setTimeout(() => sendBatch(i + 1, max, delay), delay)
        } else {
          setImmediate(() => sendBatch(i + 1, max))
        }
      }
    
    })
    

    Client

    const io = require('socket.io-client')
    const socket = io('http://localhost:8082', {transports: ['websocket']})
    
    socket.on('connect_error', err => console.error('Socket connect error:', err))
    socket.on('connect_timeout', err => console.error('Socket connect timeout:', err))
    socket.on('reconnect', err => console.error('Socket reconnect:', err))
    socket.on('reconnect_attempt', err => console.error('Socket reconnect attempt:', err))
    socket.on('reconnecting', err => console.error('Socket reconnecting', err))
    socket.on('reconnect_error', err => console.error('Socket reconnect error:', err))
    socket.on('reconnect_failed', err => console.error('Socket reconnect failed:', err))
    
    function batch(n){
      socket.on('batch', function(data){
        if ( data[0] >= n ) {
          let end = Date.now()
          let persec = n / (( end - start ) / 1000)
          console.log('Took %s ms for %s at %s/s', end - start, n, persec.toFixed(1))
          return socket.close()
        }
      })
    }
    
    function startDump(count = 500000){
      socket.emit('start dump', { count: count })
      console.log('Start dump', count)
      batch(count)
    }
    function startStream(count = 50, delay = 1000){
      socket.emit('start', { count: count, delay: delay })
      console.log('Start stream', count, delay)
      batch(count)
    }
    
    function pingIt(i, max = 50){
      socket.on('single', function(data){
        console.log('Got a single with:', data)
        if (i >= max) {
          let end = Date.now()
          let persec = i / (end - start) * 1000
          console.log('Took %s ms %s/s', end - start, persec.toFixed(2))
          return socket.close()
        }
        socket.emit('single', i+=1)
      })
      socket.emit('single', i)
    }
    
    let start = Date.now()
    
    //console.log('args command: %s  count: %s  delay: %s',process.argv[2], process.argv[3], process.argv[4])
    switch(process.argv[2]){
      case 'ping':   pingIt(0, process.argv[3]); break
      case 'stream': startStream(process.argv[3], process.argv[4]); break
      case 'dump':   startDump(process.argv[3]); break
      default:       console.log('ping stream dump'); socket.close()
    }
    

    To test request/response round trips

     node socketio-client.js ping 4
    

    To test throughput, dumping messages as fast as the server can send them.

     node socketio-client.js dump 100000
    

    To test a stream of 1000 messages with an 18 ms delay between each, which is about 50 messages per second.

     node socketio-client.js stream 1000 18
    

    On my dev machine (2 GHz CPU) I can dump about 40,000 messages per second to a single localhost client, with the 4 integers as a payload (counter + 3 timestamps). Both the server and client node processes use 95-100% of a CPU core each, so pure throughput looks OK.

    I can emit 100 messages per second to 100 local clients at 55% CPU usage on the server process.

    I can't get more than 130-140 messages per second to 100 clients out of a single node process on my dev machine.

    A new, high-frequency Intel Skylake CPU server might demolish those numbers locally. Add in a possibly flaky network connection and it will bring them right back down. Anything beyond local-network latency will probably erode whatever you perceive you will gain from such high message rates, and latency jitter on anything slower will play havoc with the "frame rate" of the messages on the client end. You would probably need to timestamp messages and track them on the client.
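    A sketch of that client-side tracking (the `makeJitterTracker` helper is my own illustration; it only looks at deltas between arrival times, so server/client clock offset cancels out):

```javascript
// Sketch: measure inter-arrival jitter of a message stream on the client.
// record() is called with the arrival time of each message, e.g. Date.now().
function makeJitterTracker() {
  let last = null
  const deltas = []
  return {
    record(arrivalMs) {
      if (last !== null) deltas.push(arrivalMs - last)
      last = arrivalMs
    },
    stats() {
      if (deltas.length === 0) return null
      const mean = deltas.reduce((a, b) => a + b, 0) / deltas.length
      const variance = deltas.reduce((a, d) => a + (d - mean) ** 2, 0) / deltas.length
      // jitter here is the standard deviation of inter-arrival times
      return { mean, jitter: Math.sqrt(variance), samples: deltas.length }
    },
  }
}

// On the client:
// const tracker = makeJitterTracker()
// socket.on('batch', () => tracker.record(Date.now()))
```

    A perfectly steady 50/s stream would show a mean of 20 ms and near-zero jitter; over a real WAN the jitter number tells you how ragged the client-side "frame rate" actually is.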

    If you do run into problems, there are also lower-level websocket libraries like ws that require more implementation work from you but give you more control over socket connections, and you can probably eke more performance out of them.

    The more connections you have, the more contention you will get between the rest of your code and the socket code. You will probably end up needing multiple Node.js processes to keep things smooth. The cluster module can split the app across processes. You may need something like Redis, ZeroMQ or Nanomsg to manage the IPC. V8 in Node 9 supports SharedArrayBuffer and Atomics, but not much has landed in Node yet to use them with workers.
