Do we still need a connection pool for microservices talking HTTP2?

心在旅途 asked on 2021-01-15 05:19

As HTTP/2 supports multiplexing, do we still need a pool of connections for microservice communication? If yes, what are the benefits of having such a pool?

1 Answer
  • 2021-01-15 06:01

    Yes, you still need a connection pool in a client contacting a microservice.

    First, in general it's the server that controls the amount of multiplexing: it advertises how many concurrent streams it allows on a connection (via SETTINGS_MAX_CONCURRENT_STREAMS), and a particular microservice server may decide to allow only a very small amount of multiplexing.
    If a client wants to use that microservice at a higher load, it needs to be prepared to open multiple connections, and this is where a connection pool comes in handy, as sketched below. This is also useful to handle load spikes.
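
    For example, Jetty's HttpClient lets you combine the HTTP/2 transport with a per-destination connection pool. The sketch below assumes a Jetty 10/11-style API; the pool size of 4 and the billing.internal URL are made-up examples, not recommendations:

        import org.eclipse.jetty.client.HttpClient;
        import org.eclipse.jetty.client.api.ContentResponse;
        import org.eclipse.jetty.http2.client.HTTP2Client;
        import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

        public class PooledHttp2Client {
            public static void main(String[] args) throws Exception {
                // HTTP/2 transport: each connection multiplexes many streams,
                // up to the limit advertised by the server.
                HttpClient httpClient =
                        new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()));

                // Keep a (small) pool per destination, so the client can open extra
                // connections when the server's stream limit is reached or load spikes.
                httpClient.setMaxConnectionsPerDestination(4);
                httpClient.start();

                // Hypothetical microservice endpoint, for illustration only.
                ContentResponse response = httpClient.GET("https://billing.internal:8443/invoices/42");
                System.out.println(response.getStatus());

                httpClient.stop();
            }
        }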

    Second, HTTP/2 has flow control, and that may severely limit the data throughput on a single connection. If the flow control windows are small (the default defined by the HTTP/2 specification is 65535 bytes, which is typically very small for microservices), then client and server will spend a considerable amount of time exchanging WINDOW_UPDATE frames to enlarge the flow control windows, and this is detrimental to throughput.
    To overcome this, you either need more connections (and again, a client should be prepared for that) or you need larger flow control windows; see the sketch below.
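
    As a rough illustration of why the default window matters: a sender cannot have more than one flow control window of unacknowledged data in flight per stream, so with the default 65535-byte window and a 10 ms round-trip time a single stream tops out around 65535 / 0.010 ≈ 6.5 MB/s (about 52 Mbit/s), no matter how fast the network is. Below is a minimal sketch of enlarging the client-side receive windows with Jetty's HTTP2Client, assuming a Jetty 10/11-style API; the 8 MiB and 16 MiB sizes are arbitrary examples:

        import org.eclipse.jetty.client.HttpClient;
        import org.eclipse.jetty.http2.client.HTTP2Client;
        import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

        public class LargeWindowClient {
            public static void main(String[] args) throws Exception {
                HTTP2Client http2Client = new HTTP2Client();

                // Larger receive windows let the server send more response data per
                // stream/connection before it has to wait for WINDOW_UPDATE frames.
                http2Client.setInitialStreamRecvWindow(8 * 1024 * 1024);    // 8 MiB per stream
                http2Client.setInitialSessionRecvWindow(16 * 1024 * 1024);  // 16 MiB per connection

                HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client));
                httpClient.start();
                // ... issue requests as usual, then httpClient.stop() on shutdown.
            }
        }

    Note that these settings only govern how much data the client is willing to receive; for large uploads it's the server's receive windows that matter.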

    Third, with large HTTP/2 flow control windows you may instead hit TCP congestion (and this is different from the socket buffer sizes) because the consumer is slower than the producer. It may be a slow server for a client upload (a REST request with a large payload), or a slow client for a server download (a REST response with a large payload).
    Again, to overcome TCP congestion the solution is to open multiple connections.

    Comparing HTTP/1.1 with HTTP/2 for the microservice use case, it's typical that the HTTP/1.1 connection pools are way larger (e.g. 10x-50x) than HTTP/2 connection pools, but you still want connection pools in HTTP/2 for the reasons above.
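
    In Jetty HttpClient terms, that sizing difference is just the per-destination connection limit configured on the two transports; the numbers below are illustrative only, not recommendations:

        import org.eclipse.jetty.client.HttpClient;
        import org.eclipse.jetty.http2.client.HTTP2Client;
        import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

        public class PoolSizing {
            public static void main(String[] args) throws Exception {
                // HTTP/1.1: one outstanding request per connection, so a large pool.
                HttpClient http1 = new HttpClient(); // no-arg constructor uses the HTTP/1.1 transport
                http1.setMaxConnectionsPerDestination(64);

                // HTTP/2: many concurrent streams per connection, so a much smaller pool suffices.
                HttpClient http2 = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()));
                http2.setMaxConnectionsPerDestination(4);
            }
        }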

    [Disclaimer: I'm the HTTP/2 implementer in Jetty.]
    We had an initial implementation where the Jetty HttpClient was using the HTTP/2 transport with a hardcoded single connection per domain, because that's what HTTP/2 preached for browsers.
    When exposed to real-world use cases, especially microservices, we quickly realized how bad an idea that was, and switched back to using connection pooling for HTTP/2 (as HttpClient has always done for HTTP/1.1).
