Our web application (ASP.NET Web Forms) has a page that displays a recently generated PDF file to users. Because the PDF file is sometimes quite large, we've implemented a
From an article on MSDN it seems that you can disable chunked encoding:
appcmd set config /section:asp /enableChunkedEncoding:False
But it's mentioned under ASP settings, so it may not apply to a response generated from an ASP.NET handler.
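For reference, the same change can be expressed as a configuration fragment. This is a sketch of the IIS 7+ `system.webServer/asp` section; note that, as mentioned above, it governs classic ASP and may have no effect on an ASP.NET handler:

```xml
<!-- Equivalent of the appcmd command above. enableChunkedEncoding lives
     in the system.webServer/asp section, which applies to classic ASP
     pages, so it may not affect responses from an ASP.NET handler. -->
<system.webServer>
  <asp enableChunkedEncoding="false" />
</system.webServer>
```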
I had a similar problem when writing a large CSV by calling Response.Write on the response stream with BufferOutput set to false. (The file didn't exist on disk; I generated each line by iterating through an in-memory collection.) The solution was to change

Response.ContentType = "text/csv"

to

Response.ContentType = "application/octet-stream"

When the content type wasn't set to application/octet-stream, a number of other response headers were added, such as Content-Encoding: gzip.
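As a sketch of what that handler looked like (the names CsvHandler and CsvRows are illustrative, not the exact code):

```csharp
using System;
using System.Collections.Generic;
using System.Web;

// Illustrative sketch of the CSV-writing handler described above.
public class CsvHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        var response = context.Response;
        response.BufferOutput = false;   // stream each line as it is written

        // With "text/csv", IIS added extra headers such as
        // Content-Encoding: gzip; "application/octet-stream" avoided that.
        response.ContentType = "application/octet-stream";
        response.AddHeader("Content-Disposition",
            "attachment; filename=export.csv");

        foreach (string line in CsvRows())   // hypothetical in-memory source
        {
            response.Write(line);
            response.Write("\r\n");
        }
    }

    public bool IsReusable { get { return true; } }

    // Hypothetical stand-in for the in-memory collection.
    private static IEnumerable<string> CsvRows()
    {
        yield return "id,name";
        yield return "1,example";
    }
}
```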
Once Response.Flush() has been called, the response body is in the process of being sent to the client, so no additional headers can be added to the response. I find it very unlikely that a second call to Response.Flush() is adding the Transfer-Encoding header at that time.
You say you have compression enabled. Compression almost always requires a chunked response, because the server doesn't know the compressed size in advance. So it would make sense that, even if the server knows the Content-Length prior to compression, it drops that header in favor of the Transfer-Encoding header and chunks the response. However, even with compression enabled on the server, the client has to explicitly state support for compression in its Accept-Encoding request header, or else the server cannot compress the response. Did you check for that in your tests?
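One way to check that from inside the handler itself (a debugging sketch, not production code):

```csharp
// Debugging sketch: log whether the client actually advertised gzip
// in its Accept-Encoding request header.
string acceptEncoding = context.Request.Headers["Accept-Encoding"] ?? "";
bool clientAcceptsGzip = acceptEncoding.IndexOf(
    "gzip", StringComparison.OrdinalIgnoreCase) >= 0;
System.Diagnostics.Trace.WriteLine(
    "Client accepts gzip: " + clientAcceptsGzip);
```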
On a final note, since you are calling Response.Flush() manually, try setting Response.Buffer = true and Response.BufferOutput = false. Apparently they have conflicting effects on how Response.Flush() operates. See the comment at the bottom of this page and this page.
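In code, that combination would look like this (again a sketch; whether the two properties really interact this way is per the linked comments, and `buffer`/`bytesRead` are hypothetical names for one chunk of the generated PDF):

```csharp
// Sketch: set both buffering properties before any output is written,
// then flush explicitly after each chunk.
Response.Buffer = true;        // classic ASP-era buffering property
Response.BufferOutput = false; // ASP.NET property; disables output buffering
Response.ContentType = "application/octet-stream";

// buffer/bytesRead are hypothetical: one chunk of the generated PDF.
Response.OutputStream.Write(buffer, 0, bytesRead);
Response.Flush();
```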