Chrome net::ERR_INCOMPLETE_CHUNKED_ENCODING error


OK. I've triple-tested this and I am 100% sure that it is being caused by my anti-virus (ESET NOD32 ANTIVIRUS 5).

Whenever I disable the Real-Time protection, the issue disappears. Today, I left the Real-Time protection off for 6-7 hours and the issue never occurred.

A few moments ago, I switched it back on, only for the problem to surface within a minute.

Over the course of the last 24 hours, I have switched the Real-Time protection on and off again, just to be sure. Each time - the result has been the same.

Update: I have come across another developer who had the exact same problem with the Real-Time protection on his Kaspersky anti-virus. He disabled it and the problem went away. i.e. This issue doesn't seem to be limited to ESET.

The error means that Chrome was cut off while the page was being sent. Your job now is to figure out why.

Apparently, this might be a known issue impacting a couple of versions of Chrome. As far as I can tell, those versions are extremely sensitive to any mismatch between the actual length of the chunk being sent and the declared size of that chunk (I could be far off on that one). In short, a slightly imperfect headers issue.

On the other hand, it could be that the server does not send the terminal 0-length chunk, which might be fixable with ob_flush();. It is also possible that Chrome (or the connection, or something else) is being slow, so when the connection is closed the page has not yet finished loading. I have no idea why this might happen.

Here is the paranoid programmer's answer:

<?php
    // ... your code
    ob_flush();   // flush PHP's output buffer first...
    flush();      // ...then push it out to the client
    sleep(2);     // give the connection a moment before closing
    exit(0);
?>

In your case, it might be that the script is timing out. I am not really sure why it should affect only you, but it could come down to a bunch of race conditions? That's an utter guess. You should be able to test this by extending the script execution time.

<?php
    // ... your while code
    set_time_limit(30);   // reset the execution-time limit on each pass through the loop
    // ... more while code
?>

It also may be as simple as you need to update your Chrome install (as this problem is Chrome specific).

UPDATE: I was able to replicate this error (at last) when a fatal error was thrown while PHP (on the same localhost) was output buffering. I imagine the output was too badly mangled to be of much use (headers but little or no content).

Specifically, I accidentally had my code recursively calling itself until PHP, rightly, gave up. Thus, the server did not send the terminal 0-length chunk - which was the problem I identified earlier.
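
If you want to see this failure mode for yourself, a purely illustrative reproduction (not my original code) is a fatal error while the page is being output-buffered:

<?php
// Illustration only: buffer some output, then die before it is ever flushed.
// The buffered HTML never reaches the client, so the chunked response is cut
// off without the terminal 0-length chunk.
ob_start();
echo '<html><body>Plenty of page content here...</body></html>';

function recurse() {
    return recurse();   // recurses until PHP gives up, as described above
}
recurse();

ob_end_flush();         // never reached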

SimonAlfie

I had this issue. I tracked it down after trying most of the other answers on this question. It was caused by incorrect owner and permissions on the /var/lib/nginx directory, and more specifically on /var/lib/nginx/tmp.

The tmp directory is used by FastCGI to buffer responses as they are generated, but only if they are above a certain size. So the issue is intermittent and only occurs when the generated response is large.

Check the nginx <host_name>.error_log to see if you are having permission issues.

To fix, ensure the owner and group of /var/lib/nginx and all sub-dirs is nginx.

The following should fix it for every client.

<?php
// Gather output (if it is not already in a variable, use ob_start() and ob_get_clean())
ob_start();
// ... your code that generates the page ...
$output = ob_get_clean();
// Before sending output:
header('Content-Length: ' . strlen($output));
echo $output;

But in my case the following was a better option and fixed it as well:

.htaccess:

php_value opcache.enable 0

OMG, I had the same problem 5 minutes ago. I spent several hours finding a solution. At first sight, disabling the antivirus solved the problem on Windows. But then I noticed the issue on another Linux PC with no antivirus. No errors in the nginx logs. My uwsgi showed something about a "Broken pipe", but not on all requests. Know what? There was no space left on the device, which I found in the database log when I restarted the server, and df confirmed it. The only explanation for why disabling the antivirus seemed to help is that it prevents browser caching (it has to check every request), while a browser with this somewhat strange behaviour can simply ignore the bad response and show a cached one instead.

In my case I was getting "/usr/local/var/run/nginx/fastcgi_temp/3/07/0000000073" failed (13: Permission denied), which was probably what was causing the Chrome net::ERR_INCOMPLETE_CHUNKED_ENCODING error.

I had to remove /usr/local/var/run/nginx/ and let nginx create it again.

$ sudo rm -rf /usr/local/var/run/nginx/
$ sudo nginx -s stop
$ sudo mkdir /usr/local/var/run/nginx/
$ sudo chown nobody:nobody /usr/local/var/run/nginx/
$ sudo nginx

It is a known Chrome problem. According to the Chrome and Chromium bug trackers, there is no universal solution for this. The problem is not related to server type or version; it lies in Chrome itself.

Setting the Content-Encoding header to identity solved this problem for me.

from developer.mozilla.org

identity | Indicates the identity function (i.e. no compression, nor modification).

So I can suggest that, in some cases, Chrome cannot handle gzip-compressed responses correctly.
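
For what it's worth, a minimal PHP sketch of sending an unencoded response could look like this (assuming zlib output compression is what would otherwise gzip the body, and that $output already holds the generated page):

<?php
// Turn off PHP's own output compression for this response...
ini_set('zlib.output_compression', 'Off');
// ...and declare that the body is not compressed or otherwise transformed.
header('Content-Encoding: identity');

echo $output; // $output is assumed to hold the generated HTML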

Here the problem was my Avast AV. As soon as I disabled it, the problem was gone.

But, I really would like to understand the cause of this behavior.

I just started having a similar problem, and noticed it was only happening when the page contained UTF-8 characters with an ordinal value greater than 255 (i.e. multibyte).

What ended up being the problem was how the Content-Length header was being calculated. The underlying backend was computing the character length rather than the byte length. Turning off the Content-Length header fixed the problem temporarily until I could fix the back-end template system.
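
To illustrate the character-count vs. byte-count distinction in PHP terms (a small hypothetical example, not the poster's actual template code):

<?php
// Multibyte UTF-8 text has more bytes than characters, and Content-Length
// must be the byte count.
$output = "Größe: 10 µm";                            // contains multibyte characters
header('Content-Length: ' . strlen($output));        // correct: byte length
// header('Content-Length: ' . mb_strlen($output));  // wrong: character count, too short
echo $output;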

This was happening on two different clients' servers separated by several years, using the same code that was deployed on hundreds of other servers for that time without issue.

For these clients, it happened mostly on PHP scripts that had streaming HTML - that is, "Connection: close" pages where output was sent to the browser as the output became available.

It turned out that the connection between the PHP process and the web server was dropping prematurely, before the script completed and way before any timeout.

The problem was opcache.fast_shutdown = 1 in the main php.ini file. This directive is disabled by default, but it seems some server administrators believe there is a performance boost to be had here. In all of my tests, I have never noted a positive difference using this setting. In my experience, it has caused some scripts to actually execute more slowly, and has an awful track record of sometimes entering shutdown while the script is still executing, or even at the end of execution while the web server is still reading from the buffer. There is an old bug report from 2013, unresolved as of Feb 2017, which may be related: https://github.com/zendtech/ZendOptimizerPlus/issues/146

I have seen the following errors appear due to this: ERR_INCOMPLETE_CHUNKED_ENCODING and ERR_SPDY_PROTOCOL_ERROR. Sometimes there are correlated segfaults logged; sometimes not.

If you experience either one, check your phpinfo, and make sure opcache.fast_shutdown is disabled.
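
If you'd rather check it from code than from phpinfo(), a quick sketch like this should do (just a convenience check, not part of the fix itself):

<?php
// Log a warning if opcache.fast_shutdown is enabled for this PHP process.
if (filter_var(ini_get('opcache.fast_shutdown'), FILTER_VALIDATE_BOOLEAN)) {
    error_log('opcache.fast_shutdown is enabled; consider setting it to 0 in php.ini');
}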

I'm sorry to say I don't have a precise answer for you. But I did encounter this problem as well and, at least in my case, found a way around it. So maybe it'll offer some clues to someone else who knows more about PHP under the hood.

The scenario is: I have an array passed to a function. The contents of this array are used to produce an HTML string to be sent back to the browser, by placing it all inside a global variable that is later printed. (This function isn't actually returning anything. Sloppy, I know, but that's beside the point.) Inside this array, among other things, are a couple of elements carrying, by reference, nested associative arrays that were defined outside of this function.

By process of elimination, I found that manipulating any element inside this array within this function, referenced or not, including an attempt to unset those referenced elements, results in Chrome throwing a net::ERR_INCOMPLETE_CHUNKED_ENCODING error and displaying no content. This is despite the fact that the HTML string in the global variable is exactly what it should be.

Only by re-tooling the script to not apply references to the array elements in the first place did things start working normally again. I suspect this is actually a PHP bug having something to do with the presence of the referenced elements throwing off the Content-Length headers, but I really don't know enough about this to say for sure.

I had this problem with a site in Chrome and Firefox. If I turned off the Avast Web Shield, it went away. I seem to have managed to get it to work with the Web Shield running by adding some of the HTML5 Boilerplate .htaccess rules to my .htaccess file:

# ------------------------------------------------------------------------------
# | Expires headers (for better cache control)                                 |
# ------------------------------------------------------------------------------

# The following expires headers are set pretty far in the future. If you don't
# control versioning with filename-based cache busting, consider lowering the
# cache time for resources like CSS and JS to something like 1 week.

<IfModule mod_expires.c>

    ExpiresActive on
    ExpiresDefault                                      "access plus 1 month"

  # CSS
    ExpiresByType text/css                              "access plus 1 week"

  # Data interchange
    ExpiresByType application/json                      "access plus 0 seconds"
    ExpiresByType application/xml                       "access plus 0 seconds"
    ExpiresByType text/xml                              "access plus 0 seconds"

  # Favicon (cannot be renamed!)
    ExpiresByType image/x-icon                          "access plus 1 week"

  # HTML components (HTCs)
    ExpiresByType text/x-component                      "access plus 1 month"

  # HTML
    ExpiresByType text/html                             "access plus 0 seconds"

  # JavaScript
    ExpiresByType application/javascript                "access plus 1 week"

  # Manifest files
    ExpiresByType application/x-web-app-manifest+json   "access plus 0 seconds"
    ExpiresByType text/cache-manifest                   "access plus 0 seconds"

  # Media
    ExpiresByType audio/ogg                             "access plus 1 month"
    ExpiresByType image/gif                             "access plus 1 month"
    ExpiresByType image/jpeg                            "access plus 1 month"
    ExpiresByType image/png                             "access plus 1 month"
    ExpiresByType video/mp4                             "access plus 1 month"
    ExpiresByType video/ogg                             "access plus 1 month"
    ExpiresByType video/webm                            "access plus 1 month"

  # Web feeds
    ExpiresByType application/atom+xml                  "access plus 1 hour"
    ExpiresByType application/rss+xml                   "access plus 1 hour"

  # Web fonts
    ExpiresByType application/font-woff                 "access plus 1 month"
    ExpiresByType application/vnd.ms-fontobject         "access plus 1 month"
    ExpiresByType application/x-font-ttf                "access plus 1 month"
    ExpiresByType font/opentype                         "access plus 1 month"
    ExpiresByType image/svg+xml                         "access plus 1 month"

</IfModule>

# ------------------------------------------------------------------------------
# | Compression                                                                |
# ------------------------------------------------------------------------------

<IfModule mod_deflate.c>

    # Force compression for mangled headers.
    # http://developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping
    <IfModule mod_setenvif.c>
        <IfModule mod_headers.c>
            SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding
            RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding
        </IfModule>
    </IfModule>

    # Compress all output labeled with one of the following MIME-types
    # (for Apache versions below 2.3.7, you don't need to enable `mod_filter`
    #  and can remove the `<IfModule mod_filter.c>` and `</IfModule>` lines
    #  as `AddOutputFilterByType` is still in the core directives).
    <IfModule mod_filter.c>
        AddOutputFilterByType DEFLATE application/atom+xml \
                                      application/javascript \
                                      application/json \
                                      application/rss+xml \
                                      application/vnd.ms-fontobject \
                                      application/x-font-ttf \
                                      application/x-web-app-manifest+json \
                                      application/xhtml+xml \
                                      application/xml \
                                      font/opentype \
                                      image/svg+xml \
                                      image/x-icon \
                                      text/css \
                                      text/html \
                                      text/plain \
                                      text/x-component \
                                      text/xml
    </IfModule>

</IfModule>

# ------------------------------------------------------------------------------
# | Persistent connections                                                     |
# ------------------------------------------------------------------------------

# Allow multiple requests to be sent over the same TCP connection:
# http://httpd.apache.org/docs/current/en/mod/core.html#keepalive.

# Enable if you serve a lot of static content but, be aware of the
# possible disadvantages!

 <IfModule mod_headers.c>
    Header set Connection Keep-Alive
 </IfModule>

I just wanted to share my experience with you in case someone has the same problem with MOODLE.

Our Moodle platform was suddenly very slow: the dashboard took about 2-3 times longer than usual to load (up to 6 seconds), and from time to time some pages didn't load at all (not a 404 error, but a blank page). In the Developer Tools console the following error was visible: net::ERR_INCOMPLETE_CHUNKED_ENCODING.

Searching for this error, it looks like Chrome is the issue, but we had the problem with various browsers. After hours of research and comparing the databases from the days before, I finally found the problem: someone had turned Event Monitoring on. However, this change wasn't visible in the "Config changes" log! Turning Event Monitoring off finally solved the problem - we had no rules defined for event monitoring anyway.

We're running Moodle 3.1.2+ with MariaDB and PHP 5.4.

My fix is:

<?php ob_start(); // buffer the whole page ?>
<!DOCTYPE html>
<html lang="de">
.....
....//your whole code
....
</html>
<?php
    // Send the buffered page in one go and stop buffering.
    ob_end_flush();
    flush();
?>

Hope this will help someone in the future. In my case it's a Kaspersky issue, but the fix above works great :)

In my case it was happening during JSON serialization of a Web API return payload. I had a 'circular' reference in my Entity Framework model: I was returning a simple one-to-many object graph, but the child had a reference back to the parent, which the JSON serializer apparently doesn't like. Removing the property on the child that referenced the parent did the trick.

Hope this helps someone who might have a similar issue.

When I faced this error (while making an AJAX call from JavaScript), the reason was that the response from the controller was erroneous: it returned JSON that was not in a valid format.

Well, not long ago I also ran into this problem, and I finally found solutions which really address the issue.

My symptoms were also pages not loading, and I found that the JSON data was being randomly truncated.

Here are the solutions I have summarized that could help to solve this problem:

1. Kill the anti-virus software process
2. Turn off Chrome's "Prerendering / Instant pages" feature
3. Try to close all the apps in your browser
4. Try to define your Content-Length header
   <?php
      header('Content-Length: ' . strlen($output));
   ?>
5. Check that your nginx fastcgi buffers are set correctly
6. Check whether your nginx gzip is turned on

If there is any loop over, or reference to, an item which does not exist, you can face this issue.

When running the app in Chrome, the page is blank and becomes unresponsive.

Scenario Start:

Dev Environment: MAC, STS 3.7.3, tc Pivotal Server 3.1, Spring MVC Web,

in ${myObj.getfName()}

Scenario End:

Reason for the issue: the getfName() function is not defined on myObj.

Hope it helps you.

My guess is that the server is not correctly handling the chunked transfer encoding. It needs to terminate a chunked response with a terminal zero-length chunk to indicate that the entire file has been transferred. So the code below may work:

echo "\n";
flush();
ob_flush();
exit(0);

In my case it was a broken config for the mysqlnd_ms PHP extension on the server. The funny thing is that it worked fine on requests with a short duration. There was a warning in the server error log, so we fixed it quickly.

This seems like a common problem with multiple causes and solutions, so I'm going to put my answer here for anyone who may require it.

I was getting net::ERR_INCOMPLETE_CHUNKED_ENCODING on Chrome, osx, php70, httpd24 combination, but the same code ran fine on the production server.

I initially tailed the regular logs but nothing really showed up. A quick ls -la later showed that system.log was the most recently touched file in /var/log, and tailing that gave me:

Saved crash report for httpd[99969] version 2.4.16 (805) 
to /Library/Logs/DiagnosticReports/httpd.crash

Contained within:

Process:               httpd [99974]
Path:                  /usr/sbin/httpd
Identifier:            httpd
Version:               2.4.16 (805)
Code Type:             X86-64 (Native)
Parent Process:        httpd [99245]
Responsible:           httpd [99974]
User ID:               70

PlugIn Path:             /usr/local/opt/php70-mongodb/mongodb.so
PlugIn Identifier:       mongodb.so

A brew uninstall php70-mongodb and a httpd -k restart later and everything was smooth sailing.

In my case it was an issue with the HTML: there was a '\n' in the JSON response causing the problem, so I removed it.

Fascinating to see how many different causes there are for this issue!

Many say it's a Chrome issue, so I tried Safari and still had issues. Then I tried all the solutions in this thread, including turning off my AVG Realtime Protection, with no luck.

For me, the issue was my .htaccess file. All it contained was FallbackResource index.php, but when I renamed it to htaccess.txt, my issue was resolved.

bhu1st

I was getting net::ERR_INCOMPLETE_CHUNKED_ENCODING; upon closer inspection of the server error logs, I found it was due to the PHP script execution timing out.

Adding this line on top of PHP script solved it for me:

ini_set('max_execution_time', 300); //300 seconds = 5 minutes

Ref: Fatal error: Maximum execution time of 30 seconds exceeded

Hmmm, I just stumbled upon a similar issue, but with a different reason behind it...

I'm using Laravel Valet on a vanilla PHP project with Laravel Mix. When I opened the site in Chrome, it was throwing net::ERR_INCOMPLETE_CHUNKED_ENCODING errors. (If I had the site loaded on HTTPS protocol, the error changed to net::ERR_SPDY_PROTOCOL_ERROR.)

I checked the php.ini and opcache was not enabled. I found that in my case the problem was related to versioning the asset files - for some reason, it did not seem to like a query string in the URL of the assets (well, oddly enough, just one in particular?).

I have removed mix.version() for the local environment, and the site loads just fine in my Chrome on both HTTP and HTTPS protocols.

In the context of a Controller in Drupal 8 (Symfony Framework) this solution worked for me:

$response = new Response($form_markup, 200, array(
  'Cache-Control' => 'no-cache',
));

$content = $response->getContent();
$contentLength = strlen($content);
$response->headers->set('Content-Length', $contentLength);

return $response;

Otherwise the response header Transfer-Encoding gets the value 'chunked', which may be a problem for the Chrome browser.

I had this problem (showing ERR_INCOMPLETE_CHUNKED_ENCODING in Chrome, nothing in other browsers). Turned out the problem was my hosting provider GoDaddy adding a monitoring script at the end of my output.

https://www.godaddy.com/community/cPanel-Hosting/how-to-remove-additional-quot-monitoring-quot-script-added/td-p/62592

Mehdi Zamani

Check the nginx folder permissions and set the appropriate web server user as the owner:

chown -R www-data:www-data /var/lib/nginx

The easiest solution is to increase the proxy_read_timeout for your proxy location to a higher value (let's say 120s) in your nginx.conf.

location / {
    ....
    proxy_read_timeout 120s;
    ....
}

I found this solution here https://rijulaggarwal.wordpress.com/2018/01/10/atmosphere-long-polling-on-nginx-chunked-encoding-error/

This generally arises when the client sends a burst of requests to the server in response to a client-side event.

This is generally a sign of "bad" programming on the client side.

Imagine I am updating all the rows of a table.

The bad way is to send a separate request to update each row (many requests fired in rapid succession without waiting for the previous request to complete). To correct it, make sure each request is complete before sending another one.

The good way would be to send a single request with all the updated rows (one request).
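
As a sketch of what that single-request approach can look like on the receiving side (a hypothetical PHP batch endpoint with made-up table and field names; the original answer contains no code):

<?php
// Hypothetical batch endpoint: the client sends all updated rows in a single
// request instead of firing one request per row.
$rows = json_decode(file_get_contents('php://input'), true) ?: [];

$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');  // assumed credentials
$stmt = $pdo->prepare('UPDATE items SET value = :value WHERE id = :id');

foreach ($rows as $row) {
    $stmt->execute([':value' => $row['value'], ':id' => $row['id']]);
}

header('Content-Type: application/json');
echo json_encode(['updated' => count($rows)]);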

So, first, look at what is happening on the client side and refactor the code if necessary.

Use Wireshark to identify what goes wrong in the requests.
