fastcgi multiplexing?

Submitted by 半腔热情 on 2019-12-06 06:47:20

Question


I'm in the process of implementing a FastCGI application, and after reading the FastCGI spec I found a feature called "request multiplexing". It reminded me of Adobe RTMP multiplexing, back in the days when that protocol was proprietary and closed.

As far as I understand, multiplexing reduces the overhead of creating new connections to FCGI clients by effectively interleaving request chunks, while at the same time enabling a "keep-alive" model for the connection. The latter allows sending several requests over a single connection.
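To make sure I have the "interleaving" part right, here is a minimal sketch (Python; the field layout and record types are taken from the FastCGI spec, the payloads are made up) of how two requests' records could share one connection:

    import struct

    # Version and record type constants from the FastCGI spec.
    FCGI_VERSION_1 = 1
    FCGI_STDIN = 5

    def fcgi_record(rtype, request_id, content=b""):
        # Every record starts with the same 8-byte header:
        # version, type, requestId (2 bytes), contentLength (2 bytes), paddingLength, reserved.
        header = struct.pack("!BBHHBx", FCGI_VERSION_1, rtype, request_id, len(content), 0)
        return header + content

    # Two requests interleaved on one connection; the receiver separates them
    # purely by the requestId field in each record header.
    wire = (
        fcgi_record(FCGI_STDIN, 1, b"part of request 1's body") +
        fcgi_record(FCGI_STDIN, 2, b"part of request 2's body") +
        fcgi_record(FCGI_STDIN, 1, b"")   # an empty STDIN record ends request 1's body
    )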

The first question is: did I get that right?

The next one is: after some googling I found that no server implements FCGI multiplexing. I was interested in "popular" servers in the first place, meaning nginx and lighttpd. I even found some discussion about deprecating FCGI request multiplexing.

So the question is: is there any server that supports this feature?


Answer 1:


I don't know whether any server implements FastCGI multiplexing (which I believe you understood correctly; the details are in the FastCGI protocol specification), and I would not bother.

You will very probably use FastCGI through an existing FastCGI library (e.g. Ocamlnet if you code in OCaml, etc.), and that library would do the multiplexing, if it does it at all. From your point of view as a user of that library, you should not really care, unless you are coding such a library yourself.

If FastCGI multiplexing bothers you, you might use the SCGI protocol, which offers similar functionality but is simpler, a bit less efficient, and non-multiplexing.
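For comparison, here is a minimal sketch of how a single SCGI request is encoded (hand-rolled for illustration, not taken from any particular library); since SCGI is non-multiplexing, one connection carries exactly one such request:

    def scgi_request(headers, body=b""):
        # An SCGI request is a netstring containing NUL-separated header pairs,
        # followed by the body. CONTENT_LENGTH must come first and "SCGI: 1" must be present.
        pairs = [("CONTENT_LENGTH", str(len(body))), ("SCGI", "1")] + list(headers.items())
        header_block = b"".join(k.encode() + b"\0" + v.encode() + b"\0" for k, v in pairs)
        return str(len(header_block)).encode() + b":" + header_block + b"," + body

    wire = scgi_request({"REQUEST_METHOD": "GET", "REQUEST_URI": "/"})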




Answer 2:


Q: multiplexing reduces the overhead of creating new connections to FCGI clients by effectively interleaving request chunks

A: True. But keep-alive also reduces new connections.

Q: while at the same time enabling a "keep-alive" model for the connection

A: Multiplexing is not required for keep-alive.

Q: The latter allows sending several requests over a single connection

A: Keep-alive allows several requests one after another. Multiplexing allows several requests in parallel.

There is no widely used FastCGI-capable web server that supports multiplexing. But nginx does support FastCGI keep-alive.
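For reference, keep-alive is signaled per request by the FCGI_KEEP_CONN flag in the FCGI_BEGIN_REQUEST record, whereas multiplexing would additionally mean sending a second begin-request with a different request id before the first one has finished. A rough sketch of that record, with the layout taken from the spec (the helper name is only for illustration):

    import struct

    FCGI_VERSION_1, FCGI_BEGIN_REQUEST = 1, 1
    FCGI_RESPONDER, FCGI_KEEP_CONN = 1, 1

    def begin_request(request_id, keep_alive=True):
        # Body: role (2 bytes), flags (1 byte, carries FCGI_KEEP_CONN), 5 reserved bytes.
        body = struct.pack("!HB5x", FCGI_RESPONDER, FCGI_KEEP_CONN if keep_alive else 0)
        header = struct.pack("!BBHHBx", FCGI_VERSION_1, FCGI_BEGIN_REQUEST, request_id, len(body), 0)
        return header + body

    # Keep-alive: one request at a time, but the connection is reused afterwards.
    # Multiplexing: begin_request(2) may be sent before request 1 has completed.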

FastCGI multiplexing is generally a bad idea, because FastCGI doesn't support flow control. That means: if a FastCGI backend sends data but the HTTP client can't receive it fast enough, the web server has to buffer all that data until it can be sent to the client.

When not using multiplexing, the web server can simply stop reading data from the FastCGI backend if the HTTP client is too slow, effectively backlogging the FastCGI backend. When using multiplexing, the web server needs to read all data from the FastCGI backend, even if one of the clients isn't receiving data fast enough.
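A toy illustration of that difference for the non-multiplexed case, assuming plain blocking sockets (the function and the limit are made up for the example): the relay simply stops reading from the backend while too much data is pending for a slow client.

    import select

    def relay(backend_sock, client_sock, bufsize=4096, max_pending=64 * 1024):
        # Relay one (non-multiplexed) FastCGI response to a possibly slow HTTP client.
        pending = b""
        while True:
            # Only ask for more backend data while the pending buffer is small;
            # otherwise we simply stop reading, which backlogs the backend.
            want_read = [backend_sock] if len(pending) < max_pending else []
            readable, writable, _ = select.select(
                want_read, [client_sock] if pending else [], [], 1.0)
            if backend_sock in readable:
                chunk = backend_sock.recv(bufsize)
                if not chunk:
                    break                      # backend finished the response
                pending += chunk
            if client_sock in writable:
                pending = pending[client_sock.send(pending):]
        while pending:                         # flush whatever is left
            pending = pending[client_sock.send(pending):]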




Answer 3:


Trying to state the answers above more precisely (and correct some parts)...

multiplexing reduces the overhead of creating new connections to FCGI clients by effectively interleaving request chunks

In contrast to keep-alive alone, it reduces new connections drastically, especially on high-load servers or where micro-services (many small requests) are in use. Furthermore, it is almost required when balancing across a network (where unix sockets can no longer be used, so the cost of connection setup matters more and more).

while at the same time enabling a "keep-alive" model for the connection

Although multiplexing is not required for keep-alive, keep-alive is almost required for multiplexing (otherwise it would make little sense).

I found that no server implements FCGI multiplexing

Few servers support multiplexing out of the box, but...
I have already seen several modules from other developers, and I have my own fcgi module for nginx (as a replacement) that supports multiplexed FastCGI requests. It shows a real performance increase in practice, especially if the upstreams are connected over the network. If someone needs it, I will try to find time to make it available on github etc.

[from answer above] FastCGI multiplexing is generally a bad idea, because FastCGI doesn't support flow control. That means: if a FastCGI backend sends data but the HTTP client can't receive it fast enough, the web server has to buffer all that data until it can be sent to the client.

This is not true. Normally the FastCGI handlers are fully asynchronous, the pool of workers is separated from the delivering workers, and so on. Each chunk gets a request-id, so if two or more upstream workers write to a single connection simultaneously, the chunks that nginx receives are simply smaller. That is the single downside. As for "the web server has to save all those data": it does this in any case (whether or not multiplexing is used), because otherwise it could run out of memory if too much pending response data piles up. So either the backend should produce less data (or be throttled), or the web server should receive it as soon as possible and transmit it to the client or save it to some interim storage (nginx, for example, does this when the pending data exceeds the values configured with the fastcgi_buffer_size and fastcgi_buffers directives).
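As a rough sketch of what that demultiplexing looks like on the web-server side (this is not nginx code; the helper names are mine): records are read one at a time and sorted into per-request buffers by the request id carried in each record header.

    import struct

    FCGI_STDOUT, FCGI_END_REQUEST = 6, 3       # record types from the FastCGI spec

    def recv_exact(sock, n):
        # Read exactly n bytes from a blocking socket.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("backend closed the connection")
            buf += chunk
        return buf

    def demux_responses(sock):
        # Per-request output buffers, keyed by the request id in each record header.
        streams = {}
        while True:
            version, rtype, request_id, length, padding = struct.unpack(
                "!BBHHBx", recv_exact(sock, 8))
            content = recv_exact(sock, length) if length else b""
            if padding:
                recv_exact(sock, padding)      # padding bytes carry no data
            if rtype == FCGI_STDOUT:
                streams.setdefault(request_id, bytearray()).extend(content)
            elif rtype == FCGI_END_REQUEST:
                yield request_id, bytes(streams.pop(request_id, b""))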

[from answer above] When using multiplexing, the web server needs to read all data from the FastCGI backend, even if one of the clients isn't receiving data fast enough.

This is also false. The web server only has to read each single chunk of a response to its end, and good worker pools have "intelligent" handling, automatically sending chunks to the web server as soon as they become available. So if multiple content providers write to so-called "reflected" channels of the same real connection, the pending packets are kept separate, and nginx receives the chunks as soon as the response data is available. Thus almost only the throughput of the connection is crucial, and it does not matter how fast the clients receive the data. And again, multiplexing vastly reduces connection setup time, which reduces the number of pending requests as well as the overall request execution time (transaction rate).



Source: https://stackoverflow.com/questions/7912322/fastcgi-multiplexing
