I have Puma running as the upstream app server and Riak as my background db cluster. When I send a request that map-reduces a chunk of data for about 25K users and returns it, I hit an "upstream timed out (110: Connection timed out) while reading response header from upstream" error from nginx.
I think this error can happen for various reasons, but it can be specific to the module you're using. For example, I saw this with the uwsgi module, so I had to set uwsgi_read_timeout.
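A minimal sketch of what that looks like (the socket path and timeout value here are placeholders, not from the original answer):

location / {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/app.sock;   # placeholder uWSGI socket
    uwsgi_read_timeout 300;          # allow up to 300s for the app to respond
}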
You should always refrain from increasing the timeouts; I doubt your backend server's response time is the issue here in any case.
I got around this issue by clearing the Connection keep-alive header and specifying the HTTP version, as per the answer here: https://stackoverflow.com/a/36589120/479632
server {
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;

        # these two lines here
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_pass http://localhost:5000;
    }
}
Unfortunately, I can't explain why this works, and I didn't manage to decipher it from the docs mentioned in the linked answer either, so if anyone has an explanation I'd be very interested to hear it.
This happens because your upstream takes too long to answer the request and NGINX thinks the upstream has already failed to process the request, so it responds with an error.
Just include and increase proxy_read_timeout in the location config block.
The same thing happened to me, and I used a 1-hour timeout for an internal app at work:
proxy_read_timeout 3600;
With this, NGINX will wait for an hour (3600s) for its upstream to return something.
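In context, that might look like the following location block (the upstream address is a placeholder; proxy_read_timeout is the only directive that matters here):

location / {
    proxy_pass http://localhost:3000;   # placeholder upstream, e.g. the Puma server
    proxy_read_timeout 3600;            # wait up to one hour for a response
}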
First figure out which upstream is slow by consulting the nginx error log file, and adjust the read timeout accordingly; in my case it was FastCGI:
2017/09/27 13:34:03 [error] 16559#16559: *14381 upstream timed out (110: Connection timed out) while reading response header from upstream, client:xxxxxxxxxxxxxxxxxxxxxxxxx", upstream: "fastcgi://unix:/var/run/php/php5.6-fpm.sock", host: "xxxxxxxxxxxxxxx", referrer: "xxxxxxxxxxxxxxxxxxxx"
So I had to adjust the fastcgi_read_timeout in my server configuration:
location ~ \.php$ {
    fastcgi_read_timeout 240;
    ...
}
See: original post
Hopefully this helps someone: I ran into this error and the cause was wrong permissions on the log folder for php-fpm; after changing them so php-fpm could write to it, everything was fine.
For the proxy upstream timeout, I tried the above settings but they didn't work. Setting resolver_timeout worked for me, knowing it was taking 30s to produce the upstream timeout message, e.g. "me.atwibble.com could not be resolved (110: Operation timed out)".
http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver_timeout
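For reference, a minimal sketch (the DNS server address and timeout value are placeholders; resolver_timeout defaults to 30s, which matches the 30s delay mentioned above):

resolver 8.8.8.8;       # placeholder DNS server used for upstream name resolution
resolver_timeout 10s;   # give up on DNS lookups after 10s instead of the 30s default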