If you make an HTTP request to a web server and it returns a response of type image/jpeg, how is the binary data actually encoded? Is it the original byte-level content of the file, passed straight through?
Strangely, it's not "straight through".
Apart from adding the MIME header, the web server appears to strip out all the JPEG markers (0xFF, 0xNN) but leaves the rest intact. This seems strange, as I don't see how the browser could then recognise the start of the image frame.
I found this out by writing my own simple web server on an embedded system. I thought I'd only need to add the MIME header and send the rest of the JFIF/JPEG file untouched, but the browser says "the image cannot be displayed because it contains errors"!
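For reference, this is roughly what my server does, sketched here in Python rather than the embedded C (the function name is made up for illustration): prepend a minimal header, then write the file bytes unmodified.

```python
def build_response(jpeg_bytes: bytes) -> bytes:
    # Minimal HTTP response: status line, MIME header, blank line,
    # then the JPEG file bytes passed through untouched.
    header = (
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: image/jpeg\r\n"
        b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii") + b"\r\n"
        b"\r\n"
    )
    return header + jpeg_bytes
```

If this really is "straight through", the body after the blank line should still begin with ff d8.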
Here's the start of the original JPEG/JFIF file in hex:
ff d8 ff e0 00 10 4a 46 49 46 00
[SOI][APP0][length]J F I F NULL
As per the spec.
The received file contains this, after the header:
0d 0a 0d 0a 00 10 4a 46 49 46 00
The first 4 bytes are CR/LF/CR/LF at the end of the header; after that there are NO markers, but the data fields are still there. The same thing happens for the other markers, e.g. start of frame.
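To check exactly which markers survive in each file, a rough scan for 0xFF 0xNN pairs works as a sanity check (it ignores segment lengths, so it's not a real parser, and the helper name is mine):

```python
def find_markers(data: bytes):
    # Return (offset, marker_byte) for each 0xFF 0xNN pair,
    # skipping 0xFF 0x00 byte-stuffing inside entropy-coded data
    # and 0xFF 0xFF fill-byte runs.
    markers = []
    i = 0
    while i < len(data) - 1:
        if data[i] == 0xFF and data[i + 1] not in (0x00, 0xFF):
            markers.append((i, data[i + 1]))
        i += 1
    return markers
```

Running it over the original and the received file should show the SOI/APP0/SOF markers present in one and absent in the other.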
Strange, huh? I don't think it's a MIME encoding issue, as the rest of the data looks intact, including the 0xFF bytes within the compressed data.
Anyone know what's going on here? PS: to look closer, just request a .jpg from any website using PuTTY or similar, save what you get, and compare it to the original (or even to a saved-as copy from the browser).
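If anyone wants to reproduce this without PuTTY, a plain-socket fetch avoids any terminal or protocol layer touching the byte stream; a sketch in Python (host and path are placeholders):

```python
import socket

def build_request(host: str, path: str) -> bytes:
    # A bare HTTP/1.0 request; no Accept-Encoding header, so the
    # server has no reason to transform the body.
    return ("GET " + path + " HTTP/1.0\r\n"
            "Host: " + host + "\r\n"
            "\r\n").encode("ascii")

def fetch_raw(host: str, path: str, port: int = 80) -> bytes:
    # Read the raw response bytes straight off a TCP socket.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return b"".join(chunks)

# Usage (placeholder URL):
# raw = fetch_raw("example.com", "/photo.jpg")
# body = raw.split(b"\r\n\r\n", 1)[1]  # should start with ff d8
```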