Serving a Python (Flask) REST API over HTTP/2

迷失自我 2021-02-04 01:32

I have a Python REST service and I want to serve it using HTTP/2. My current server setup is nginx -> Gunicorn: nginx listens on ports 443 and 80 (with 80 redirecting to 443) and proxies requests to Gunicorn.

2 Answers
  • 2021-02-04 01:51

    It is now possible to serve HTTP/2 directly from a Python app, for example using Twisted. You asked specifically about a Flask app though, in which case I'd (with bias) recommend Quart which is the Flask API reimplemented on top of asyncio (with HTTP/2 support).
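    A minimal sketch of what that looks like (the route and response here are illustrative): Quart exposes the same routing API as Flask, but handlers are coroutines, and the app can be served over HTTP/2 by an ASGI server such as Hypercorn.

    ```python
    # Minimal Quart app: Flask-style routing, but async and HTTP/2-capable.
    from quart import Quart

    app = Quart(__name__)

    @app.route("/ping")
    async def ping():
        # Returning a dict is serialized to JSON, as in Flask 1.1+.
        return {"status": "ok"}

    # Served with Hypercorn. HTTP/2 is negotiated over TLS via ALPN, so a
    # certificate is required (the file paths below are hypothetical):
    #   hypercorn --certfile cert.pem --keyfile key.pem app:app
    ```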

    Your actual issue,

    With HTTP1.0 and Python Requests as the client, each request takes ~80ms

    suggests to me that the problem you may be experiencing is that each request opens a new connection. This could be alleviated via the use of a connection pool without requiring HTTP/2.
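    A sketch of the difference, using only the standard library (a throwaway local server stands in for the real API): one persistent HTTP/1.1 connection serves several requests, so the TCP (and, in production, TLS) handshake is paid once instead of per request. With the requests library, `requests.Session()` gives the same behavior.

    ```python
    import http.client
    import threading
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    class Handler(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"  # keep-alive requires HTTP/1.1

        def do_GET(self):
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):  # silence per-request logging
            pass

    server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # One persistent connection, many requests: no new handshake per call.
    conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
    statuses = []
    for _ in range(3):
        conn.request("GET", "/")
        resp = conn.getresponse()
        resp.read()  # drain the body so the connection can be reused
        statuses.append(resp.status)
    conn.close()
    server.shutdown()
    print(statuses)  # -> [200, 200, 200]
    ```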

  • 2021-02-04 02:10

    Is it possible to serve a Python (Flask) application with HTTP/2?

    Yes. Going by the information you provide, you are already doing it correctly.

    In my case (one reverse proxy server and one serving the actual API), which server has to support HTTP2?

    Now I'm going to tread on thin ice and give opinions.

    The way HTTP/2 has been deployed so far is by having an edge server that talks HTTP/2 (like ShimmerCat or Nginx). That server terminates TLS and HTTP/2, and from there on uses HTTP/1, HTTP/1.1 or FastCGI to talk to the inner application.
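    In Nginx terms, that arrangement looks roughly like this (hostname, port, and certificate paths are hypothetical):

    ```nginx
    server {
        listen 443 ssl http2;          # HTTP/2 terminates here, at the edge
        server_name api.example.com;   # hypothetical hostname

        ssl_certificate     /etc/nginx/certs/fullchain.pem;  # hypothetical paths
        ssl_certificate_key /etc/nginx/certs/privkey.pem;

        location / {
            # The inner application only ever sees plain HTTP/1.1
            proxy_pass http://127.0.0.1:8000;
            proxy_http_version 1.1;
        }
    }
    ```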

    Can an edge server, at least in theory, talk HTTP/2 to the inner web application? Yes, but HTTP/2 is complex, and for inner applications it doesn't pay off very well.

    That's because most web application frameworks are built for handling requests for content, and that's done well enough with HTTP/1 or FastCGI. Although there are exceptions, web applications have little use for the subtleties of HTTP/2: multiplexing, prioritization, the myriad security precautions, and so on.

    The resulting separation of concerns is in my opinion a good thing.


    Your 80 ms response time may have little to do with the HTTP protocol you are using, but if those 80 ms are mostly spent waiting for input/output, then of course running things in parallel is a good thing.

    Gunicorn will use a thread or a process to handle each request (unless you have gone the extra mile to configure the greenlets backend), so consider whether letting Gunicorn spawn thousands of tasks is viable in your case.
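    As a sketch (the worker count and bind address are illustrative), the greenlets backend mentioned above is enabled by choosing an async worker class, which lets each worker process juggle many I/O-bound requests concurrently:

    ```shell
    # Requires the gevent extra: pip install "gunicorn[gevent]"
    # -w: number of worker processes; -k gevent: greenlet-based workers
    gunicorn -w 4 -k gevent --bind 127.0.0.1:8000 app:app
    ```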

    If the content of your requests allow it, maybe you can create temporary files and serve them with an HTTP/2 edge server.
