Nginx upstream prematurely closed connection while reading response header from upstream, for large requests

Asked by 無奈伤痛 on 2020-12-24 04:27

I am using nginx in front of a Node server to handle update requests. When I request an update on a large amount of data, I get a gateway timeout, and I see this error in the nginx error logs:

    upstream prematurely closed connection while reading response header from upstream
9 Answers
  • 2020-12-24 05:04

You can increase the timeout for a single route in Node like so:

    app.post('/slow/request', function (req, res) {
        // Give this request up to 100 seconds before the socket times out
        req.connection.setTimeout(100000);
        // ... handle the slow update, then respond ...
    });
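
    If every route needs more time, the timeout can also be raised server-wide. A minimal sketch, assuming an Express app (the port and route are placeholders):

    const express = require('express');
    const app = express();

    app.post('/slow/request', function (req, res) {
        // ... long-running update work, then respond ...
        res.sendStatus(200);
    });

    // app.listen() returns the underlying http.Server; setTimeout() on it
    // applies to every incoming socket rather than a single request.
    const server = app.listen(3000);
    server.setTimeout(100000); // 100 seconds

    Keep in mind that nginx's proxy_read_timeout must be at least as long, or nginx will still give up on the upstream first.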

  • 2020-12-24 05:04

    In my case, I tried increasing the timeout in the configuration file, but it did not work. It later turned out the page loaded fine when filtering for less data to display: in views.py I just added "& Q(year=2019)" so that only the data for 2019 is shown. A permanent fix, by the way, would be pagination.

    from django.db.models import Q
    from django.shortcuts import render

    from .models import OfferGroup  # assuming the model lives in this app's models.py

    def list_offers(request, list_type):
        context = {}
        context['list_type'] = list_type
        if list_type == 'ready':
            context['menu_page'] = 'ready'
            # Restricting to one year keeps the queryset small enough for
            # the upstream to respond before nginx times out
            offer_groups = OfferGroup.objects.filter(
                ~Q(run_status=OfferGroup.DRAFT) & Q(year=2019)
            ).order_by('-year', '-week')
        # (branches for other list_type values elided in the original answer)

        context['grouped_offers'] = offer_groups

        return render(request, 'app_offers/list_offers.html', context)
    
  • 2020-12-24 05:07

    I met the same problem and none of the solutions detailed here worked for me... First of all I had a 413 Request Entity Too Large error, so I updated my nginx.conf as follows:

    http {
            # Increase request size
            client_max_body_size 10m;
    
            ##
            # Basic Settings
            ##
    
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;
    
            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;
    
            include /etc/nginx/mime.types;
            default_type application/octet-stream;
    
            ##
            # SSL Settings
            ##
    
            ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
            ssl_prefer_server_ciphers on;
    
            ##
            # Logging Settings
            ##
    
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;
    
            ##
            # Gzip Settings
            ##
    
            gzip on;
    
            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    
            ##
            # Virtual Host Configs
            ##
    
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
    
            ##
            # Proxy settings
            ##
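            # Note: proxy_* directives set at http level are inherited by all
            # server/location blocks; values without a unit are seconds.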
            proxy_connect_timeout 1000;
            proxy_send_timeout 1000;
            proxy_read_timeout 1000;
            send_timeout 1000;
    }
    

    So I only updated the http part, and now I get a 502 Bad Gateway error, and /var/log/nginx/error.log shows the famous "upstream prematurely closed connection while reading response header from upstream".

    What is really mysterious to me is that the request works when I run the app with virtualenv on my server and send the request directly to IP:8000/nameOfTheRequest, bypassing nginx.

    Thanks for reading

  • 2020-12-24 05:09

    I got the same error; here is how I resolved it:

    • Downloaded the logs from AWS.
    • Reviewed the nginx logs; there were no details beyond the error above.
    • Reviewed the Node.js logs and found an AccessDenied AWS SDK permissions error.
    • Checked the S3 bucket that the app was trying to read from.
    • Granted the server role read permission on the additional bucket.

    Even though I was processing large files, there were no other errors and no settings I had to change once I corrected the missing S3 access.
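
    For anyone hitting a similar cause: an AWS SDK error that is never caught can take down the Node process, which nginx then reports as the upstream closing prematurely. A rough sketch under that assumption (bucket, key and route are placeholders; AWS SDK v2):

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    app.get('/report', function (req, res) {
        s3.getObject({ Bucket: 'my-bucket', Key: 'big-file.csv' }).promise()
            .then(data => res.send(data.Body))
            .catch(err => {
                // Without this catch, an AccessDenied rejection goes unhandled;
                // if it brings the process down, nginx sees the upstream close
                // before any response headers are sent.
                console.error(err);
                res.sendStatus(500);
            });
    });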

  • 2020-12-24 05:11

    I don't think this is your case, but I'll post it in case it helps anyone. I had the same issue, and the problem was that Node didn't respond at all: I had a condition that, when it failed, did nothing, so no response was ever sent. If increasing all your timeouts didn't solve it, make sure every code path sends a response, as in the sketch below.
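
    A minimal sketch of that failure mode (the route and validation helper are made up):

    app.post('/update', function (req, res) {
        if (isValid(req.body)) { // hypothetical validation helper
            res.json({ ok: true });
        }
        // Bug: when isValid() is false, no branch ever calls res.send()
        // or res.end(), so the request hangs until nginx gives up on the
        // upstream and logs the error above.
    });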

  • 2020-12-24 05:16

    I think that error from nginx indicates that the connection was closed by your Node.js server (i.e., the "upstream"). How is Node.js configured?
