I created an endpoint in my Flask app which generates a spreadsheet from a database query (remote DB) and then sends it as a download in the browser. Flask doesn't throw any errors.
There are many potential causes and solutions for this problem. In my case, the back-end code was taking too long to run. Modifying these variables fixed it for me.
Nginx:
- proxy_connect_timeout
- proxy_send_timeout
- proxy_read_timeout
- fastcgi_send_timeout
- fastcgi_read_timeout
- keepalive_timeout
- uwsgi_read_timeout
- uwsgi_send_timeout
- uwsgi_socket_keepalive
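As a sketch of where these directives go (the 300-second values are placeholders, not recommendations):

```nginx
# In the http, server, or location block of nginx.conf.
# All values below are illustrative; tune to your slowest request.
proxy_connect_timeout 300s;
proxy_send_timeout    300s;
proxy_read_timeout    300s;
uwsgi_read_timeout    300s;
uwsgi_send_timeout    300s;
keepalive_timeout     300s;
```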
uWSGI: limit-post.
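On the uWSGI side, limit-post caps the request body size; a sketch with an illustrative value:

```ini
; uwsgi.ini -- the value is illustrative (bytes)
[uwsgi]
limit-post = 104857600  ; allow request bodies up to ~100 MB
```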
Replace uwsgi_pass 0.0.0.0:5002; with uwsgi_pass 127.0.0.1:5002; or, better, use Unix sockets.
It seems many causes can lie behind this error message. I know you are using uwsgi_pass, but for those hitting the problem on long requests when using proxy_pass, setting http-timeout on uWSGI may help (it is not the harakiri setting).
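A sketch of that setting, assuming uWSGI's own HTTP router is in use (the port is a placeholder):

```ini
; uwsgi.ini -- illustrative; raises the HTTP router's socket timeout
[uwsgi]
http = :8080
http-timeout = 300
```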
As mentioned by @mahdix, the error can be caused by Nginx sending a request with the uwsgi protocol while uwsgi is listening on that port for http packets.
When in the Nginx config you have something like:
upstream org_app {
    server 10.0.9.79:9597;
}

location / {
    include uwsgi_params;
    uwsgi_pass org_app;
}
Nginx will use the uwsgi protocol. But if in uwsgi.ini you have something like this (or its equivalent on the command line):
http-socket=:9597
uwsgi will speak http, and the error mentioned in the question appears. See native HTTP support.
A possible fix is to have instead:
socket=:9597
In which case Nginx and uwsgi will communicate with each other using the uwsgi protocol over a TCP connection.
Side note: if Nginx and uwsgi are on the same node, a Unix socket will be faster than TCP. See using Unix sockets instead of ports.
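For example, the Unix-socket variant (the socket path is a placeholder) pairs an Nginx location like this with socket = /tmp/uwsgi.sock in uwsgi.ini:

```nginx
# nginx -- illustrative socket path; must match uwsgi.ini's socket=
location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;
}
```

Make sure the socket file's permissions allow the Nginx worker user to read and write it (e.g. via uWSGI's chmod-socket option).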
I had the same sporadic errors in an Elastic Beanstalk single-container Docker WSGI app deployment. On the environment's EC2 instance, the upstream configuration looks like:
upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}
With this default upstream simple load test like:
siege -b -c 16 -t 60S -T 'application/json' 'http://host/foo POST {"foo": "bar"}'
...on the EC2 instance led to availability of only ~70%. The rest were 502 errors caused by "upstream prematurely closed connection while reading response header from upstream".
The solution was either to remove the keepalive setting from the upstream configuration, or, easier and more reasonable, to enable HTTP keep-alive on uWSGI's side as well, with --http-keepalive (available since 1.9).
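A sketch of the uWSGI side, assuming the app is served through uWSGI's HTTP router (port and module are placeholders):

```ini
; uwsgi.ini -- illustrative; keeps connections open so nginx's
; upstream keepalive pool does not hit prematurely closed sockets
[uwsgi]
http = :8080
http-keepalive = true   ; requires uWSGI >= 1.9
module = myapp:app      ; placeholder WSGI entry point
```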
Change nginx.conf to include
sendfile on;
client_max_body_size 20M;
keepalive_timeout 0;
See the self-answer "uwsgi upstart on amazon linux" for a full example.