Nginx Reverse Proxy WebSocket Timeout


Question


I'm using java-websocket for my WebSocket needs inside a Wowza application, with nginx handling SSL and proxying the requests to Java.

The problem is that the connection seems to be cut after exactly one hour, server-side. The client side doesn't even realize it has been disconnected for quite some time. I don't want to just raise the timeout in nginx; I want to understand why the connection is being terminated, since the socket works as usual until it doesn't.

EDIT: Forgot to post the configuration:

location /websocket/ {
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    include conf.d/proxy_websocket;
    proxy_connect_timeout 1d;
    proxy_send_timeout 1d;
    proxy_read_timeout 1d;
}

And that included config:

proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_pass                      http://127.0.0.1:1938/;

  • Nginx/1.12.2
  • CentOS Linux release 7.5.1804 (Core)
  • Java WebSocket 1.3.8 (GitHub)

Answer 1:


The timeout could be coming from the client, nginx, or the back-end. When you say that it is being cut "server side", I take that to mean you have demonstrated that it is not the client. Your nginx configuration looks like it shouldn't time out for a full day, so that leaves only the back-end.

Test the back-end directly

My first suggestion is that you try connecting directly to the back-end and confirm that the problem still occurs (taking nginx out of the picture for troubleshooting purposes). Note that you can do this with command line utilities like curl, if using a browser is not practical. Here is an example test command:

time curl --trace-ascii curl-dump.txt -i -N \
  -H "Host: example.com" \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: BOGUS+KEY+HERE+IS+FINE==" \
  http://127.0.0.1:8080

In my (working) case, the above command stayed open indefinitely (I stopped it manually with Ctrl-C), since neither curl nor my server implements a timeout. However, when I changed it to go through nginx as a proxy (with its default timeout of 1 minute), as shown below, I saw a 504 response from nginx after almost exactly one minute.

time curl -i -N --insecure \
  -H "Host: example.com" \
  https://127.0.0.1:443/proxied-path
HTTP/1.1 504 Gateway Time-out
Server: nginx/1.14.2
Date: Thu, 19 Sep 2019 21:37:47 GMT
Content-Type: text/html
Content-Length: 183
Connection: keep-alive

<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx/1.14.2</center>
</body>
</html>

real    1m0.207s
user    0m0.048s
sys 0m0.042s

Other ideas

Someone mentioned trying proxy_ignore_client_abort, but that shouldn't make any difference unless the client is closing the connection. Besides, although that might keep the inner connection open, I don't think it can keep the end-to-end stream intact.

You may want to try proxy_socket_keepalive, though that requires nginx >= 1.15.6.
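For reference, here is a minimal sketch of how that could be added to the location block from the question (nothing else changed; the directive requires nginx 1.15.6 or newer, so it won't work on the 1.12.2 build listed above):

location /websocket/ {
    # Enables TCP SO_KEEPALIVE on the connection from nginx to the proxied
    # back-end; the keepalive interval itself comes from the OS settings.
    proxy_socket_keepalive on;

    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    include conf.d/proxy_websocket;
    proxy_read_timeout 1d;
    proxy_send_timeout 1d;
}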

Finally, there's a note in the WebSocket proxying doc that hints at a good solution:

Alternatively, the proxied server can be configured to periodically send WebSocket ping frames to reset the timeout and check if the connection is still alive.

If you have control over the back-end and want connections to stay open indefinitely, periodically sending "ping" frames to the client should prevent the connection from being closed due to inactivity (making a long proxy_read_timeout unnecessary), no matter how long the connection stays open or how many middle-boxes are involved. If the client is a web browser, no client-side change is needed: responding to pings is part of the WebSocket spec.
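Since the back-end here is Java-WebSocket, a minimal server-side sketch could look like the following. It relies on the library's connection-lost checker (setConnectionLostTimeout), which as I understand it sends a ping at the configured interval and drops connections that stop answering with pongs; the class name, echo behaviour, and 30-second interval are illustrative, and the port simply matches the proxy_pass from the question.

import java.net.InetSocketAddress;

import org.java_websocket.WebSocket;
import org.java_websocket.handshake.ClientHandshake;
import org.java_websocket.server.WebSocketServer;

public class PingingServer extends WebSocketServer {

    public PingingServer(int port) {
        super(new InetSocketAddress("127.0.0.1", port));
        // Send a ping roughly every 30 seconds and close connections that
        // no longer respond. This keeps traffic flowing through nginx so
        // its read timeout never fires.
        setConnectionLostTimeout(30);
    }

    @Override
    public void onOpen(WebSocket conn, ClientHandshake handshake) { }

    @Override
    public void onClose(WebSocket conn, int code, String reason, boolean remote) { }

    @Override
    public void onMessage(WebSocket conn, String message) {
        conn.send(message); // simple echo, just to keep the sketch self-contained
    }

    @Override
    public void onError(WebSocket conn, Exception ex) {
        ex.printStackTrace();
    }

    @Override
    public void onStart() {
        System.out.println("WebSocket server listening on port " + getPort());
    }

    public static void main(String[] args) {
        new PingingServer(1938).start();
    }
}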




Answer 2:


Most likely it's because your websocket proxy configuration needs a little tweaking, but since you asked:

There are some challenges that a reverse proxy server faces in supporting WebSocket. One is that WebSocket is a hop‑by‑hop protocol, so when a proxy server intercepts an Upgrade request from a client it needs to send its own Upgrade request to the backend server, including the appropriate headers. Also, since WebSocket connections are long lived, as opposed to the typical short‑lived connections used by HTTP, the reverse proxy needs to allow these connections to remain open, rather than closing them because they seem to be idle.

Within the location block that handles your websocket proxying, you need to include these headers. This is the example NGINX gives:

location /wsapp/ {
    proxy_pass http://wsbackend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
}

This should now work because:

NGINX supports WebSocket by allowing a tunnel to be set up between a client and a backend server. For NGINX to send the Upgrade request from the client to the backend server, the Upgrade and Connection headers must be set explicitly, as in this example
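In your case the Upgrade and Connection headers are already being set by the included conf.d/proxy_websocket file, so a merged view of your own configuration and the example above would look roughly like this (a sketch assembled only from directives already shown in the question):

location /websocket/ {
    proxy_pass              http://127.0.0.1:1938/;
    proxy_http_version      1.1;
    proxy_set_header        Upgrade $http_upgrade;
    proxy_set_header        Connection "upgrade";
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_read_timeout      1d;
    proxy_send_timeout      1d;
}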

I'd also recommend having a look at the Nginx Nchan module, which adds websocket functionality directly into nginx. It works well.



Source: https://stackoverflow.com/questions/52078122/nginx-reverse-proxy-websocket-timeout
