I'm trying to create a multiplayer game with NodeJS and I want to synchronize the actions between clients.
What would be the best way to find the latency (the time it takes a message to travel) between the server and the clients?
Here's my really quick and dirty script to test the ping ... just head to http://yourserver:8080 in your browser and watch the server console (the SSH terminal, in my case).
var http = require('http');
var io = require('socket.io');
var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/html'});
  res.write('<html>\n');
  res.write(' <head>\n');
  res.write(' <title>Node Ping</title>\n');
  res.write(' <script src="/socket.io/socket.io.js"></script>\n');
  res.write(' <script>\n');
  res.write(' var socket = new io.Socket();\n');
  res.write(' socket.on("connect",function(){ });\n');
  res.write(' socket.on("message",function(){ socket.send(1); });\n');
  res.write(' socket.connect();\n');
  res.write(' </script>\n');
  res.write(' </head>\n');
  res.write(' <body>\n');
  res.write(' <h1>Node Ping</h1>\n');
  res.write(' </body>\n');
  res.write('</html>\n');
  res.end();
});
server.listen(8080);
console.log('Server running at http://127.0.0.1:8080/');
var socket = io.listen(server);
socket.on('connection', function(client){
  var start = new Date().getTime();
  client.send(1);
  client.on('message', function(message){
    client.send(1);
    console.log(new Date().getTime() - start); // round-trip time in ms
    start = new Date().getTime();              // restart the timer for the next round trip
  });
  client.on('disconnect', function(){});
});
I'm very curious about this because my pings seem pretty high (200-400 ms round trip) on large VPS boxes with dedicated resources, both in California and New Jersey (I'm on the east coast). I'm betting there's just a lot of latency on the VPS boxes because they're serving so much traffic?
The thing that gets me is that a regular ping from the Linux terminal, from the same client to the same server, averages 11 ms, more than an order of magnitude lower ... am I doing something wrong, or is something slow with node.js/socket.io/websockets?
...I still wasn't satisfied. I visited the official docs and well, well, well - the solution is already built-in.
You just need to implement it - check out mine:
// (Connect to socket).
var latency = 0;
socket.on('pong', function(ms) {
  latency = ms;
  console.log(latency);
});
// Do cool things, knowing the latency...
var server = require('http').Server(app); // app is your Express app (or any request handler)
// "socket.io": "^1.7.1"
// Set pingInterval to whatever you want - 'pong' gets emitted for you!
var io = require('socket.io')(server, {pingInterval: 5000});
After the socket.io connection has been established, you create a new Date object on the client, let's call it startTime. This is your initial time before making a request to the server. You then emit a ping event from the client. The naming convention is totally up to you. Meanwhile, the server should be listening for a ping event, and when it receives the ping, it immediately emits a pong event. The client then catches the pong event. At this time you want to create another date object that represents Date.now(). So at this point you have two date objects: the initial date before making a request to the server, and another date object after you made a request to the server and it replied. Subtract startTime from the current time and you have the latency.
// Client
var socket = io.connect('http://localhost');
var startTime;
var latency;

setInterval(function() {
  startTime = Date.now();
  socket.emit('ping');
}, 2000);

socket.on('pong', function() {
  latency = Date.now() - startTime;
  console.log(latency);
});
// Server
io.sockets.on('connection', function (socket) {
  socket.on('ping', function() {
    socket.emit('pong');
  });
});
Also available as a Github Gist.
Read first — Due to repeated questions why this is supposed to work, let me clarify a bit.
The callback function clientCallback below is defined on the client and has access to the start variable containing the time stamp. This is the ack() argument in socket.io: socket.io allows you to define a callback function which appears to be executed by the server, but this actually just passes the function arguments through the web socket, and the client then calls the callback.
What happens below (please do check the example code!):
1. The client takes the current timestamp, e.g. 1453213686429, and stores it in start.
2. The client sends a ping event to the server and waits for an answer.
3. The server executes clientCallback with empty arguments (check the demo code if you want to see arguments).
4. clientCallback again takes the current timestamp on the client, e.g. 1453213686449, and knows that 20 ms have passed since it sent the request.
Imagine the druid (client) holding a stopwatch and pushing the button when the messenger (event) starts running, and pushing it again when the messenger arrives with his scroll (function arguments). The druid then reads the scroll, adds the ingredient names to his potion recipe, and brews the potion (the callback).
Okay, forget the previous paragraph, I guess you got the point.
Although the question has already been answered, here is a short implementation for checking the RTT with socket.io:
// Client
var start = Date.now();
this.socket.emit('ping', function clientCallback() {
  console.log('Websocket RTT: ' + (Date.now() - start) + ' ms');
});

// Server
socket.on('ping', function (fn) {
  fn(); // Simply execute the callback on the client
});
Demo code as a node module: socketIO-callback.tgz. Set it up and run it with
npm install
node callback.js
and then navigate to http://localhost:5060
What I usually do to send a timestamp with each request:
1. On the client, create a new Date() and send its timestamp: date.getTime() to the server, with every JSON request.
2. When the server receives the request, it immediately puts processed: (new Date()).getTime() in the object, so the arrival time is stored on the request as req.processed.
3. Just before the server responds, it sends back the original timestamp from the request, and a new processed field: processed: (new Date()).getTime() - req.processed, which now contains the number of milliseconds it took to process the request.
4. The client takes the timestamp (which is the same that was sent in pt 1), subtracts it from the current time, and subtracts the processing time (processed), and there is your "real" ping time in milliseconds (a sketch follows below).
I think you should always include the time for both request and response in the ping time, even if there is one-way communication. This is because that is the standard meaning behind "ping time" and "latency". And if it is one-way communication and the latency is only half of the real ping time, that's just a "good thing".
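For illustration, here is a minimal sketch of the four steps above over socket.io; the update / updateDone event names and the exact object shapes are my own assumptions, only the timestamp and processed fields follow the scheme described:
// Server: stamp the arrival time, then reply with the original timestamp plus the processing time.
io.sockets.on('connection', function (socket) {
  socket.on('update', function (req) {
    req.processed = (new Date()).getTime();               // pt 2: arrival time on the server
    // ... handle the request here ...
    socket.emit('updateDone', {
      timestamp: req.timestamp,                           // pt 3: same timestamp the client sent
      processed: (new Date()).getTime() - req.processed   // pt 3: ms spent processing on the server
    });
  });
});

// Client: send a timestamp with every request (pt 1), then subtract it and the processing time (pt 4).
socket.emit('update', { timestamp: (new Date()).getTime() });
socket.on('updateDone', function (res) {
  var ping = (new Date()).getTime() - res.timestamp - res.processed; // "real" ping time in ms
  console.log(ping);
});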
I'm going to assume you are using WebSockets or Socket.IO since you are implementing a game where latency matters (and you tagged it as such).
I would think the server should probably measure and keep track of this for each client.
You probably want to implement some sort of ping action that the server can request of the client. As soon as the client receives the request, it sends back a response to the server. The server then divides the measured round-trip time by 2 and updates the latency for that client. You probably want the server to do this periodically with each client, and to average the last several measurements, so that you don't get strange behavior from sudden but temporary spikes.
Then, when there is a message from one client that needs to be sent (or broadcast) to another client, the server can add client1's latency to client2's latency and communicate this as the latency offset to client2 as part of the message. Client2 will then know that the event on client1 happened that many milliseconds ago.
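Here is a rough sketch of that server-side approach, using a socket.io acknowledgement callback (the same mechanism as in the clientCallback answer above); the latencyPing event name, the 5-second interval, and the 5-sample averaging window are my own choices, not something prescribed by socket.io:
// Server: periodically ping each client and keep a rolling average of one-way latency.
io.sockets.on('connection', function (socket) {
  var samples = [];
  socket.latency = 0;

  var timer = setInterval(function () {
    var start = Date.now();
    // The acknowledgement callback is invoked by the client; when it fires here,
    // the message has made a full round trip.
    socket.emit('latencyPing', function () {
      samples.push((Date.now() - start) / 2);    // one-way latency estimate
      if (samples.length > 5) samples.shift();   // keep only the last few samples
      socket.latency = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
    });
  }, 5000);

  socket.on('disconnect', function () { clearInterval(timer); });
});

// Client: just acknowledge the ping so the server can stop its stopwatch.
socket.on('latencyPing', function (ack) { ack(); });
The per-socket latency value can then be added to the other client's value to compute the offset described above.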
An additional reason to do this on the server is that some browser Javascript timestamps are inaccurate: http://ejohn.org/blog/accuracy-of-javascript-time/. I suspect node.js timestamps are just as accurate as V8's (which is one of the few accurate ones), or more so.