HTTP/2 streams vs HTTP/1.1 connections

Submitted on 2019-12-13 15:55:19

Question


If we disregard the overhead of new connection creation in HTTP/1.1, are there any cases where HTTP/1.1 connections perform better than HTTP/2 streams?

I conducted some performance tests on page load times and noticed that HTTP/1.1 (https) performs better than HTTP/2 for requests with large responses. Then, when I start to increase the concurrency level, HTTP/2 starts to perform better. In other words, the concurrency level at which HTTP/2 starts to give better performance goes up with the size of the response message.

For me it is clear why HTTP/2 starts to perform better as the concurrency level increases. But I can't figure out why requests returning larger responses need higher concurrency than requests returning small responses before HTTP/2 shows better performance.

Adding some test results.

Server: Jetty, Browser: Chrome, Latency: 100 ms, Bandwidth: 100 Mbit/s

I retrieved X number of 100KB images from a web page, where X varies from 1 to 500.

Further, loading 100 1 MB images resulted in HTTP/2 being 50% slower than HTTP/1.1.


Answer 1:


HTTP/2 uses flow control to prevent endpoints from allocating an unbounded amount of memory.

Typically, browsers send a WINDOW_UPDATE frame to enlarge their session receive flow control window (by default only 65535 octets), and therefore the server's session send flow control window.
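As a rough illustration (my own minimal model, not Jetty's or any browser's actual implementation), here is how a sender's flow control window depletes and why a response larger than the default 65535-octet window forces a stall until a WINDOW_UPDATE arrives:

```python
# Minimal model of an HTTP/2 send flow control window (RFC 7540 initial
# value: 65535 octets). The sender must stop once the window reaches zero
# and may only resume after the receiver grants credit via WINDOW_UPDATE.

DEFAULT_WINDOW = 65535  # initial session/stream window, in octets

class SendWindow:
    def __init__(self, size=DEFAULT_WINDOW):
        self.available = size

    def send(self, nbytes):
        """Consume up to nbytes of window; return how many octets were sent."""
        sent = min(nbytes, self.available)
        self.available -= sent
        return sent

    def window_update(self, increment):
        """Receiver granted more credit (a WINDOW_UPDATE frame arrived)."""
        self.available += increment

# A 100 KB response does not fit in the default window: the sender stalls
# after 65535 octets and must wait for a WINDOW_UPDATE before continuing.
w = SendWindow()
first_burst = w.send(100 * 1024)          # only 65535 octets go out, then the stream stalls
w.window_update(DEFAULT_WINDOW)           # client consumed the data and granted more credit
second_burst = w.send(100 * 1024 - first_burst)
```

The larger the response, the more of these stall-and-wait cycles it goes through, which is one way response size interacts with flow control.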

Flow control is thus an additional variable to consider when comparing HTTP/1 and HTTP/2 downloads, since HTTP/1 has no equivalent mechanism.

The server may start writing data, exhaust its stream or session send flow control window, and stop writing until the client has consumed the data and sent a WINDOW_UPDATE frame back to the server.

With HTTP/2, the stream or the session may stall because of flow control, something that in HTTP/1 does not happen.

Jetty is highly configurable in this case.

First of all, you can monitor whether the session or the stream has stalled. This is exposed via JMX in the FlowControlStrategy implementation (AbstractFlowControlStrategy.get[Session|Stream]StallTime()).

If you try to perform the test with Jetty's HTTP/2 client rather than the browser, you can also tune when WINDOW_UPDATE frames are sent by tuning the BufferingFlowControlStrategy.bufferRatio parameter: the closer to 0.0, the earlier the WINDOW_UPDATE frame is sent; the closer to 1.0, the later it is sent.
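The effect of such a ratio can be sketched with a simplified model (my own illustration, not Jetty's BufferingFlowControlStrategy code): the receiver accumulates consumed octets and only emits a WINDOW_UPDATE once the accumulated amount crosses ratio * window:

```python
# Simplified model (not Jetty's actual implementation) of how a
# buffer-ratio parameter delays WINDOW_UPDATE frames: the receiver
# accumulates consumed octets and sends a WINDOW_UPDATE only once the
# accumulated amount reaches buffer_ratio * window.

WINDOW = 65535  # receive flow control window, in octets

def window_updates(consumed_chunks, buffer_ratio):
    """Return the cumulative byte offsets at which WINDOW_UPDATEs are sent."""
    threshold = buffer_ratio * WINDOW
    buffered = 0   # octets consumed since the last WINDOW_UPDATE
    total = 0      # octets consumed overall
    offsets = []
    for chunk in consumed_chunks:
        buffered += chunk
        total += chunk
        if buffered >= threshold:
            offsets.append(total)  # WINDOW_UPDATE for `buffered` octets
            buffered = 0
    return offsets

chunks = [16384] * 4                   # four 16 KiB DATA frames consumed
early = window_updates(chunks, 0.25)   # low ratio: frequent, early updates
late = window_updates(chunks, 1.0)     # high ratio: a single, late update
```

A low ratio trades more WINDOW_UPDATE frames for a smaller chance that the sender's window ever runs dry.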

The test should also report the network round-trip time between client and server, because it affects (and often dominates) the time a WINDOW_UPDATE frame takes to travel from client to server.
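With the question's own test parameters (100 Mbit/s, 100 ms latency, here assumed to be the round-trip time), the bandwidth-delay product dwarfs the default window, which is exactly the stalling scenario described above:

```python
# Bandwidth-delay product for the question's test setup: 100 Mbit/s link,
# 100 ms round trip (assumed). To keep the pipe full, the send flow
# control window must be at least bandwidth * RTT octets.

bandwidth_bps = 100_000_000   # 100 Mbit/s
rtt_s = 0.100                 # 100 ms round trip
default_window = 65535        # RFC 7540 initial flow control window

bdp_octets = int(bandwidth_bps / 8 * rtt_s)   # octets "in flight" at line rate

# The default window covers only a few percent of the BDP, so a sender
# restricted to it spends most of each round trip stalled, waiting for
# the next WINDOW_UPDATE.
utilization = default_window / bdp_octets
```

Unless the client enlarges the windows well beyond 65535 octets, the link can never be kept full at this bandwidth and latency.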

In a perfect download, you want the client to send the WINDOW_UPDATE frame early enough that, by the time it reaches the server, the server has not yet exhausted the stream/session send flow control window; that way the send flow control window stays open and the server never stalls.
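That "early enough" condition can be made concrete with a back-of-the-envelope check (using the same assumed link parameters as the question's setup): the window credit still unspent when the WINDOW_UPDATE is sent must take at least one round trip to drain at line rate, or the sender stalls before the update arrives:

```python
# Back-of-the-envelope stall check: if a WINDOW_UPDATE is sent once a
# fraction `buffer_ratio` of the window has been consumed, the remaining
# (1 - buffer_ratio) * window octets must take at least one RTT to drain
# at line rate, otherwise the sender stalls before the update arrives.

def stalls(window_octets, buffer_ratio, bandwidth_bps, rtt_s):
    remaining = (1 - buffer_ratio) * window_octets
    drain_time_s = remaining / (bandwidth_bps / 8)
    return drain_time_s < rtt_s

# With the default 65535-octet window on a 100 Mbit/s, 100 ms RTT link,
# no ratio avoids stalling: even the whole window drains in about 5 ms.
assert stalls(65535, 0.5, 100_000_000, 0.100)

# Enlarging the window well beyond the bandwidth-delay product fixes it:
assert not stalls(4_000_000, 0.5, 100_000_000, 0.100)
```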

I don't know how configurable the timing of the browser's WINDOW_UPDATE frames is, however, so for large downloads this may hurt the download speed.

You want to keep an eye on how large the client reconfigures its session and stream receive flow control windows to be, and on when it sends WINDOW_UPDATE frames.

Lastly, another parameter that may influence download speed is the TLS cipher used. It may happen that your HTTP/1 connection negotiates a much weaker (and therefore cheaper) cipher than the one negotiated for HTTP/2 (because HTTP/2 requires only very strong ciphers), making even a non-stalled HTTP/2 download slower than HTTP/1 purely because of encryption overhead.



Source: https://stackoverflow.com/questions/37216715/http-2-streams-vs-http-1-1-connections
