xhr poll error: using cluster #300
Can confirm this issue. What is even stranger is that with a cluster size of 2 there is no problem, but with 3 or more the problem starts to happen.
Interesting find, but this is happening for the same reason we require sticky sessions on load balancers when running engine.io servers. The xhr poll error occurs because the different poll requests are being sent to different cluster backends. Each cluster backend is a separate Node.js process and does not share memory with the others. The session is established with the first request (and a session id is assigned), but future requests get routed to a different process which does not know about that session id. Further, the actual error from the response is being masked by the hardcoded 'xhr poll error' string. Inspecting the responseText reveals the server's real error about the unknown session id.
This is an amusing way to expose the fact that we require sticky sessions so that requests can be routed to the correct backend, which is aware of the active session ids. If you want to use cluster, you will need an adapter on top of the engine.io server that shares session ids and session data between processes; otherwise, avoid cluster and instead run multiple separate processes behind a load balancer that supports sticky sessions. I think we should update our README/docs/guide to mention that cluster should be avoided due to this limitation. We should also pass along the response text error so that debugging this is easier in the future.
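For illustration, here is a minimal sketch of the failure mode described above (the port and worker count are illustrative, not from this thread):

// sketch: each cluster worker keeps its own in-memory session table
const cluster = require("cluster");
const http = require("http");
const engine = require("engine.io");

if (cluster.isMaster) {
  // the OS distributes incoming connections across workers, so two
  // consecutive poll requests from one client may hit different processes
  cluster.fork();
  cluster.fork();
} else {
  const httpServer = http.createServer();
  // each worker has a private map of session ids; a sid issued by one
  // worker is unknown to the other, producing the masked "xhr poll error"
  engine.attach(httpServer);
  httpServer.listen(3000); // cluster lets all workers share this port
}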
Additional reference: https://github.com/indutny/sticky-session (though it may not work 100%, it is a good starting point for an engine.io-cluster-support module).
sticky-session does not help if the project runs on Heroku with several dynos, and this became a show-stopper for horizontal scaling in our app :( What happens is that the handshake succeeds on one dyno, while subsequent polling requests are routed to another dyno that does not recognize the session id.
This is so far the cause of the problem. Any suggestions on how to handle this would be very helpful. Thanks in advance.
@neemah Just get off Heroku and use a hosting provider that actually supports real-time applications (and has a load balancer that uses sticky sessions).
@defunctzombie
@3rd-Eden yep, that is why we don't recommend it outright
@3rd-Eden there are problems with using Amazon as well, since their ELB doesn't support HTTP 1.1, so you have to pick between websockets (TCP load balancing) or polling (HTTP with sticky sessions).
@3rd-Eden I'd be very pleased if you could suggest one that handles sticky sessions.
HAProxy, nginx, http-proxy (node), and many others.
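For example, a sticky front end can be sketched with node's http-proxy by pinning each client to a backend through an address hash (a sketch only; the two backend ports and the hash function are illustrative, not a hardened implementation):

const http = require("http");
const httpProxy = require("http-proxy");

// illustrative backends: two separate engine.io processes
const targets = ["http://localhost:3001", "http://localhost:3002"];
const proxy = httpProxy.createProxyServer({});

// naive IP hash: the same client is always routed to the same backend,
// so every poll request reaches the process that issued its session id
function pick(req) {
  const ip = req.socket.remoteAddress || "";
  let hash = 0;
  for (const ch of ip) hash += ch.charCodeAt(0);
  return targets[hash % targets.length];
}

const server = http.createServer((req, res) => {
  proxy.web(req, res, { target: pick(req) });
});

// route websocket upgrades to the same backend as well
server.on("upgrade", (req, socket, head) => {
  proxy.ws(req, socket, head, { target: pick(req) });
});

server.listen(3000);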
Is there any reason we need sticky sessions? It's an anti-pattern. I would like to store the session information in a distributed database such as Cassandra. If someone could point me in the right direction, I would be willing to develop a module to do this with a Cassandra data store. It would help our application scale horizontally on AWS.
@3rd-Eden That is why I built https://github.com/wzrdtales/socket-io-sticky-session, which supports hashing layer-4 information instead. But I would also prefer to be able to use something other than sticky sessions. With layer-4 information it is now possible to balance in a more controlled way, but the best option would be to balance clients without caring too much about the handshake. Thus the ideal would be for engine.io to finally support a handshake that works across servers, for example in combination with a shared store like Redis.
For future readers: I think it is implemented this way because, without sticky sessions, the handshake lands on one instance while subsequent requests are spread across the others. Since the event handlers are registered upon connection (in the current implementation, at least), any subsequent HTTP request would have to be forwarded to the 1st instance, but that wouldn't scale well, would it? The same goes for outgoing packets: the instance that holds the connection must be the one to send them. Besides, we have published @socket.io/sticky. Sample usage:

const cluster = require("cluster");
const http = require("http");
const { Server } = require("socket.io");
const redisAdapter = require("socket.io-redis");
const numCPUs = require("os").cpus().length;
const { setupMaster, setupWorker } = require("@socket.io/sticky");
if (cluster.isMaster) {
  console.log(`Master ${process.pid} is running`);

  const httpServer = http.createServer();
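  // route every incoming connection to a worker; sticky routing keeps
  // all requests from a given client on the same worker process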
  setupMaster(httpServer, {
    loadBalancingMethod: "least-connection", // either "random", "round-robin" or "least-connection"
  });

  httpServer.listen(3000);

  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on("exit", (worker) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork();
  });
} else {
  console.log(`Worker ${process.pid} started`);

  const httpServer = http.createServer();
  const io = new Server(httpServer);
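  // the Redis adapter relays packets between workers, so a broadcast
  // from one worker reaches clients connected to the others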
  io.adapter(redisAdapter({ host: "localhost", port: 6379 }));
  setupWorker(io);

  io.on("connection", (socket) => {
    /* ... */
  });
}

The documentation was updated accordingly: https://socket.io/docs/v3/using-multiple-nodes/#Using-Node-JS-Cluster
server.js
client.js
engine.io version: 1.4.3
engine.io-client version: 1.4.3
node.js version: 0.10.34
os: Windows 8.1 64-bit
C++ compiler: Microsoft Visual Studio Community 2013 (Visual C++ 2013)