We have a system where a client makes an HTTP GET request, the system does some processing on the backend, zips the results, and sends the zip to the client. Since the processing can
My guess is that the zip output stream doesn't actually write anything until it has enough data to compress. The Huffman coding used by ZIP's DEFLATE algorithm needs to see a whole block of data before it can build its code tables, so the compressor buffers the input and only emits compressed output once a block is complete or the stream is finished. It basically can't start writing before it knows what it is compressing.
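As a minimal sketch of that buffering, assuming the backend uses Java's java.util.zip.ZipOutputStream (the entry name and sizes here are just illustrative), you can watch how little reaches the underlying stream until the zip is finished:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipBufferingDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        ZipOutputStream zip = new ZipOutputStream(sink);
        zip.putNextEntry(new ZipEntry("result.txt"));

        byte[] chunk = new byte[1024]; // 1 KB of zero bytes, highly compressible
        for (int i = 1; i <= 64; i++) {
            zip.write(chunk);
            // Very little (often nothing beyond the entry header) reaches the
            // sink while the deflater is still accumulating input.
            System.out.println("wrote " + i + " KB, sink holds " + sink.size() + " bytes");
        }

        zip.closeEntry();
        zip.finish();
        // Only now are the compressed block and the central directory written out.
        System.out.println("after finish(), sink holds " + sink.size() + " bytes");
    }
}
```

If the sink were the HTTP response stream instead of a ByteArrayOutputStream, the client would see the same stall: almost nothing arrives until the compressor decides to emit a block or the stream is closed.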
Zipping might be a win if the amount of data is large, but I don't think you can achieve an asynchronous response while zipping the data.