Question
I'm trying to weigh the pros and cons of setting the Content-Length HTTP header versus using chunked encoding to return [possibly] large files from my server. One or the other is needed to be compliant with the HTTP 1.1 spec when using persistent connections. I see the advantages of the Content-Length header as being:
- Download dialogs can show an accurate progress bar
- The client knows upfront whether the file may be too large for it to ingest
The downside is having to calculate the size before you return the object, which isn't always practical and could add to server/database utilization. The downsides of chunked encoding are the small overhead of adding the chunk size before each chunk and the loss of an accurate download progress bar. Any thoughts? Any other HTTP considerations for either method that I may not have thought of?
Answer 1:
Use Content-Length, definitely. The server utilization from this will be almost nonexistent and the benefit to your users will be large.
For dynamic content, it's also quite simple to add compressed response support (gzip). That requires output buffering, which in turn gives you the content length. (This is not practical for file downloads or already-compressed content such as audio and images.)
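As a rough illustration of that output-buffering approach, here is a minimal sketch assuming a Java Servlet environment; the class name and the renderPage() helper are hypothetical. The rendered page is gzipped into an in-memory buffer, which makes the compressed size known before any headers are sent:

import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;
import javax.servlet.http.*;

public class CompressedPageServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
        String html = renderPage();   // hypothetical page renderer

        // Buffer the gzipped body in memory so its exact size is known up front.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (Writer writer = new OutputStreamWriter(new GZIPOutputStream(buffer), StandardCharsets.UTF_8)) {
            writer.write(html);
        }
        byte[] body = buffer.toByteArray();

        resp.setContentType("text/html; charset=UTF-8");
        resp.setHeader("Content-Encoding", "gzip");
        resp.setContentLength(body.length);       // length of the compressed body
        resp.getOutputStream().write(body);
    }

    private String renderPage() {
        return "<html><body>Hello</body></html>";
    }
}

The trade-off is memory: the whole compressed body is held in RAM, which is fine for HTML pages but, as noted above, not for large file downloads or already-compressed content.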
Consider also adding support for partial content/byte-range serving, that is, the capability to restart downloads. See here for a byte-range example (the example is in PHP, but it is applicable in any language). You need Content-Length when serving partial content.
Of course, those are not silver bullets: for streaming media, it's pointless to use output buffering or a response size; for large files, output buffering doesn't make sense, but Content-Length and byte serving make a lot of sense (restarting a failed download becomes possible).
Personally, I serve Content-Length whenever I know it; for a file download, checking the file size is insignificant in terms of resources. The result: the user gets a determinate progress bar (and dynamic pages download faster thanks to gzip).
Answer 2:
If the content length is known beforehand, then I would certainly prefer it over sending in chunks. Whether the content comes from static files on the local disk file system or from a database, any self-respecting programming language and RDBMS provides a way to get the content length beforehand. You should make use of it.
On the other hand, if the content length is really unpredictable beforehand (e.g. when your intent is to zip several files together and send the result as one download), then sending it in chunks may be faster than buffering it in the server's memory or writing it to the local disk file system first. But this does impact the user experience negatively, because the download progress is unknown. The impatient may then abort the download and move along.
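A sketch of that scenario, assuming a Java Servlet container and hypothetical input files: nothing here sets Content-Length, so for an HTTP/1.1 client the container will normally frame the body with Transfer-Encoding: chunked.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import javax.servlet.http.*;

public class ZipBundleServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
        List<Path> files = List.of(Path.of("a.txt"), Path.of("b.txt"));   // hypothetical inputs

        resp.setContentType("application/zip");
        resp.setHeader("Content-Disposition", "attachment; filename=\"bundle.zip\"");

        // Writing straight to the response avoids buffering the whole archive,
        // at the cost of an unknown total size (no accurate progress bar on the client).
        try (ZipOutputStream zip = new ZipOutputStream(resp.getOutputStream())) {
            for (Path file : files) {
                zip.putNextEntry(new ZipEntry(file.getFileName().toString()));
                Files.copy(file, zip);
                zip.closeEntry();
            }
        }
    }
}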
Another benefit of knowing the content length beforehand is the ability to resume downloads. I see in your post history that your main programming language is Java; you can find here an article with more technical background information and a Java Servlet example which does that.
Answer 3:
Content-Length
The Content-Length header determines the byte length of the request/response body. If you neglect to specify the Content-Length header, HTTP servers will implicitly add a Transfer-Encoding: chunked header. The Content-Length and Transfer-Encoding headers should not be used together. Without a Content-Length, the receiver has no idea how long the body is and cannot estimate the download completion time. If you do add a Content-Length header, make sure it matches the entire body in bytes; if it is incorrect, the behaviour of receivers is undefined.
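One easy way to get this wrong with text bodies is to count characters instead of encoded bytes. A small stand-alone Java illustration (the string is arbitrary):

import java.nio.charset.StandardCharsets;

public class ContentLengthExample {
    public static void main(String[] args) {
        String body = "naïve café";                              // 10 characters
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);    // 12 bytes in UTF-8

        System.out.println("characters: " + body.length());      // 10
        System.out.println("Content-Length: " + bytes.length);   // 12
    }
}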
The Content-Length header will not allow streaming, but it is useful for large binary files where you want to support partial content serving. This basically means resumable downloads, paused downloads, partial downloads, and multi-homed downloads. This requires the use of an additional header called Range. This technique is called byte serving.
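For illustration, here is a hedged sketch of byte serving in a Java Servlet, handling only simple single-range requests; the servlet name and file path are hypothetical, and a production version would also validate the range and handle If-Range, ETags, and multi-range requests.

import java.io.OutputStream;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;
import javax.servlet.http.*;

public class ByteRangeServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
        Path file = Path.of("/data/large.bin");              // hypothetical file
        long total = Files.size(file);
        long start = 0, end = total - 1;

        String range = req.getHeader("Range");               // e.g. "bytes=1000-1999"
        if (range != null && range.startsWith("bytes=")) {
            String[] parts = range.substring(6).split("-", 2);
            start = Long.parseLong(parts[0]);
            if (parts.length > 1 && !parts[1].isEmpty()) end = Long.parseLong(parts[1]);
            resp.setStatus(206);                              // 206 Partial Content
            resp.setHeader("Content-Range", "bytes " + start + "-" + end + "/" + total);
        }
        long length = end - start + 1;

        resp.setHeader("Accept-Ranges", "bytes");
        resp.setContentType("application/octet-stream");
        resp.setContentLengthLong(length);                    // length of the slice, not the whole file

        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(start);
            OutputStream out = resp.getOutputStream();
            byte[] buf = new byte[8192];
            long remaining = length;
            while (remaining > 0) {
                int n = raf.read(buf, 0, (int) Math.min(buf.length, remaining));
                if (n < 0) break;
                out.write(buf, 0, n);
                remaining -= n;
            }
        }
    }
}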
Transfer-Encoding
The use of Transfer-Encoding: chunked is what allows streaming within a single request or response. This means that the data is transmitted in a chunked manner, and it does not impact the representation of the content.
Officially, an HTTP client is meant to send a request with a TE header field that specifies what kinds of transfer encodings it is willing to accept. This is not always sent, however, and most servers assume that clients can process chunked encoding.
The chunked transfer encoding makes better use of persistent TCP connections, which HTTP 1.1 assumes to be the default.
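For reference, a chunked body on the wire looks like this: each chunk is prefixed with its size in hexadecimal, and a zero-length chunk terminates the body (the payload strings here are just an illustration):

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

7
Mozilla
9
Developer
7
Network
0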
Content-Encoding
It is also possible to compress chunked or non-chunked data. In practice, this is done via the Content-Encoding header.
Note that Content-Length is equal to the length of the body after the Content-Encoding has been applied. This means that if you gzip your response, the length calculation happens after compression. You will need to be able to load the entire body in memory if you want to calculate the length (unless you have that information elsewhere).
When streaming with chunked encoding, the compression algorithm must also support online processing. Thankfully, gzip supports stream compression. The content gets compressed first and is then cut up into chunks; the receiver reassembles the chunks and decompresses the result to obtain the real content. If it were the other way around, you would receive the compressed stream first and decompressing it would yield chunks, which doesn't make sense.
A typical compressed stream response may have these headers:
Content-Type: text/html
Content-Encoding: gzip
Transfer-Encoding: chunked
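A minimal sketch of producing that combination from a Java Servlet (the class name is hypothetical): Content-Encoding: gzip is set explicitly, no Content-Length is ever known, and the container therefore falls back to chunked framing while the gzip stream compresses the output as it is produced.

import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;
import javax.servlet.http.*;

public class StreamingGzipServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
        resp.setContentType("text/html");
        resp.setHeader("Content-Encoding", "gzip");
        // Note: no setContentLength() call; the container chooses chunked framing.

        try (PrintWriter out = new PrintWriter(new OutputStreamWriter(
                new GZIPOutputStream(resp.getOutputStream()), StandardCharsets.UTF_8))) {
            for (int i = 0; i < 1000; i++) {
                out.println("<p>row " + i + "</p>");   // compressed as it is produced
            }
        }
    }
}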
Semantically, the usage of Content-Encoding indicates an "end-to-end" encoding scheme, which means that only the final client or final server is supposed to decode the content. Proxies in the middle are not supposed to decode the content.
If you want to allow proxies in the middle to decode the content, the correct header to use is in fact the Transfer-Encoding header. If the HTTP request contained a TE: gzip, chunked header, then it would be legal to respond with Transfer-Encoding: gzip, chunked.
However, this is very rarely supported, so for now you should only use Content-Encoding for your compression.
Source: https://stackoverflow.com/questions/2419281/content-length-header-versus-chunked-encoding