I need some help from some Linux gurus. I am working on a webapp that includes a comet server. The comet server runs on localhost:8080 and exposes the URL localhost:8080/long_polling.
Without doing some serious TCP/IP mangling, you can't expose two applications on the same TCP port on the same IP address. Once nginx has started servicing the connection, it can't hand it off to another application; it can only proxy it.
So either use another port, another IP address (it could be on the same physical machine), or live with the proxy.
Edit: I guess nginx is timing out because it doesn't see any activity for a long time. Maybe adding a null message every few minutes could keep the connection from failing.
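For the "live with the proxy" option, here is a minimal sketch of what the nginx side could look like, assuming the comet server keeps listening on 127.0.0.1:8080 and the /long_polling path from the question (the timeout value is purely illustrative):

location /long_polling {
    proxy_pass http://127.0.0.1:8080;
    # A long poll holds the response open, so give the proxy a read
    # timeout comfortably longer than the longest expected poll.
    proxy_read_timeout 600s;
    # Don't buffer the upstream response, so data pushed by the comet
    # server reaches the client as soon as it is written.
    proxy_buffering off;
}

With a read timeout that generous, the null-message trick above becomes a safety net rather than a requirement.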
I don't think that is possible ... localhost:8080/long_polling is a URI ... more exactly, it should be http://localhost:8080/long_polling ... in HTTP, that URI is resolved as a request for /long_polling to the server at the domain 'localhost' on port 8080 ... that is, opening a TCP connection to 127.0.0.1:8080 and sending

GET /long_polling HTTP/1.1
Host: localhost:8080

plus some additional HTTP headers ... I haven't heard yet that ports can be bound across processes ...
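For comparison, once nginx sits in front on port 80 and proxies /long_polling to the comet server, the browser would request http://localhost/long_polling instead, sending

GET /long_polling HTTP/1.1
Host: localhost

and nginx would open its own connection to 127.0.0.1:8080 to fetch the response on the client's behalf.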
Actually, if I understand correctly, nginx was designed to be a scalable proxy ... also, they claim it needs 2.5 MB for 10,000 idle HTTP connections ... so that really shouldn't be a problem ...
What comet server are you using? Could you maybe let the comet server proxy a webserver instead? Normal HTTP requests should be handled quickly ...
greetz
back2dos
Try
proxy_next_upstream error;
The default is
proxy_next_upstream error timeout;
The timeout here refers to proxy_connect_timeout, which cannot be more than 75 seconds.
http://wiki.nginx.org/NginxHttpProxyModule#proxy_next_upstream
http://wiki.nginx.org/NginxHttpProxyModule#proxy_connect_timeout
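As a sketch, the relevant directives together might look like this (the values are placeholders, not recommendations):

# Only retry the next upstream on hard errors; with the default
# "error timeout", a poll that idles past the read timeout is also
# treated as an upstream failure.
proxy_next_upstream error;

# proxy_connect_timeout is capped at 75 seconds, so lean on the
# read/send timeouts to keep a long-held poll open.
proxy_connect_timeout 30s;
proxy_read_timeout    600s;
proxy_send_timeout    600s;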
There is now a Comet plugin for Nginx. It will probably solve your issues quite nicely.
http://www.igvita.com/2009/10/21/nginx-comet-low-latency-server-push/
I actually managed to get this working now. Thank you all. The reason nginx was returning 504 Gateway Time-out errors was a silly one: I hadn't included proxy.conf in my nginx.conf, like so:
include /etc/nginx/proxy.conf;
So I'm keeping nginx as a frontend proxy to the comet server.
Here are my nginx.conf and my proxy.conf. Note, however, that proxy.conf is way overkill; I was just setting all of these options while trying to debug my program.
/etc/nginx/nginx.conf
worker_processes 1;
user www-data;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}

http {
    include /etc/nginx/proxy.conf;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;

    sendfile on;
    tcp_nopush on;
    keepalive_timeout 600;
    tcp_nodelay on;

    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
/etc/nginx/proxy.conf
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 6000;
proxy_send_timeout 6000;
proxy_read_timeout 6000;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
send_timeout 6000;
proxy_buffering off;
proxy_next_upstream error;
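For completeness, the server block that actually forwards /long_polling to the comet server lives under sites-enabled and isn't shown above. A rough sketch of what it might look like, assuming the comet server still listens on 127.0.0.1:8080 (the file name is made up):

# /etc/nginx/sites-enabled/myapp (hypothetical)
server {
    listen 80;
    server_name localhost;

    # Long-polling requests go to the comet server; the timeout and
    # buffering behavior come from the proxy.conf included above.
    location /long_polling {
        proxy_pass http://127.0.0.1:8080;
    }
}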