I'm receiving duplicate messages in my clustered node.js/socket.io/redis pub/sub application

旧巷少年郎 2021-02-04 18:02

I'm using Node.js, Socket.io with RedisStore, Cluster from the Socket.io guys, and Redis.

I have a pub/sub application that works well on just one Node.js node, but once I cluster it, subscribers start receiving duplicate messages.

2 Answers
  • 2021-02-04 18:37

    Turns out this isn't a problem with Node.js/Socket.io; I was just going about it completely the wrong way.

    Not only was I publishing into the Redis server from outside the Node/Socket.io stack, I was also still directly subscribed to the Redis channel. On both ends of the pub/sub situation I was bypassing the "Socket.io cluster with Redis Store on the back end" goodness.
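
    In code terms, the broken setup looked roughly like this (a sketch, not my actual code: the channel and event names are placeholders, and the RedisStore wiring from the Socket.io wiki is omitted):

        var http  = require('http');
        var redis = require('redis');

        var server = http.createServer();
        var io     = require('socket.io').listen(server);   // 0.x-style API; RedisStore configured per the Socket.io wiki (omitted)
        var sub    = redis.createClient();

        // every cluster worker runs this same code...
        sub.subscribe('rails-updates');                      // placeholder channel that Rails publishes to
        sub.on('message', function (channel, message) {
          // io.sockets.emit() goes through the RedisStore and fans out to ALL
          // workers, each of which also ran the subscribe above -- so every
          // browser ends up getting one copy of the message per worker
          io.sockets.emit('update', JSON.parse(message));
        });

        server.listen(8080);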

    So, I created a little app (with Node.js/Socket.io/Express) that took messages from my Rails app and 'announced' them into a Socket.io room using the socket.io-announce module. Now, by using Socket.io's routing magic, each Node worker only gets and sends messages to the browsers connected to it directly. In other words, no more duplicate messages, since both the pub and the sub happen within the Node.js/Socket.io stack.
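
    The bridge app boils down to something like this sketch (the route, room, and event names are placeholders, and the announce calls assume socket.io-announce mirrors Socket.io's own in(room).emit() style -- check its README for the exact API):

        var express  = require('express');
        var announce = require('socket.io-announce').createClient();

        var app = express();
        app.use(express.json());    // newer Express; older versions used express.bodyParser()

        // Rails POSTs JSON here instead of publishing straight into Redis
        app.post('/announce', function (req, res) {
          // the announcement goes through the Socket.io RedisStore, so each
          // worker only delivers it to the browsers connected to it
          announce.in(req.body.room).emit('message', req.body.payload);
          res.send({ ok: true });
        });

        app.listen(3100);           // hypothetical port for the bridge app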

    After I get my code cleaned up I'll put an example up on GitHub somewhere.

  • 2021-02-04 18:46

    I've been battling with cluster and socket.io. Every time I use the cluster functionality (I use the built-in Node.js cluster, though), I get a lot of performance problems and issues with socket.io.

    While researching this, I've been digging through the bug reports and similar on the socket.io GitHub repo, and anyone using clusters or external load balancers in front of their servers seems to have problems with socket.io.

    It seems to produce the error "client not handshaken client should reconnect", which you will see if you increase the verbose logging. It appears a lot whenever socket.io runs in a cluster, so I think it comes down to this: the client gets connected to a random instance in the socket.io cluster every time it makes a new connection (it makes several HTTP/socket/Flash connections when authorizing, and more later on when polling for new data), so the handshake done on one worker isn't recognized by the others.

    For now I've reverted to using only one socket.io process at a time. This might be a bug, but it could also be a shortcoming of how socket.io is built.

    Added: My way of solving this in the future will be to assign a unique port to each socket.io instance inside the cluster and then cache the port selection on the client side (see the sketch below).
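
    A rough sketch of that idea with the built-in cluster module (the base port and event name are hypothetical):

        var cluster = require('cluster');
        var http    = require('http');
        var os      = require('os');

        var BASE_PORT = 8000;                      // hypothetical base port

        if (cluster.isMaster) {
          for (var i = 0; i < os.cpus().length; i++) {
            // hand each worker its own port via the environment
            cluster.fork({ SOCKET_PORT: BASE_PORT + i });
          }
        } else {
          var port   = parseInt(process.env.SOCKET_PORT, 10);
          var server = http.createServer();
          var io     = require('socket.io').listen(server);   // 0.x-style API

          io.sockets.on('connection', function (socket) {
            // tell the client which port it landed on so it can cache it and
            // always reconnect to this same instance
            socket.emit('assigned-port', { port: port });
          });

          server.listen(port);
        }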
