boost::asio sending data faster than receiving over TCP. Or how to disable buffering


Question


I have created a client/server program; the client starts an instance of the Writer class and the server starts an instance of the Reader class. The Writer then asynchronously writes DATA_SIZE bytes of data to the Reader every USLEEP milliseconds.

Every successive async_write request by the Writer is issued only after the "on write" handler from the previous request has been called.
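A minimal sketch of that write chain (not my actual code, which is linked in NOTE2 below; DATA_SIZE and USLEEP are placeholder constants and the class layout is illustrative only) looks roughly like this:

    #include <boost/asio.hpp>
    #include <chrono>
    #include <vector>

    class Writer {
    public:
        Writer(boost::asio::io_context& io, boost::asio::ip::tcp::socket socket)
            : timer_(io), socket_(std::move(socket)), buffer_(DATA_SIZE, 0) {}

        void start() { write(); }

    private:
        static constexpr std::size_t DATA_SIZE = 4096; // hypothetical value
        static constexpr int USLEEP = 10;              // hypothetical value (ms)

        void write() {
            boost::asio::async_write(socket_, boost::asio::buffer(buffer_),
                [this](const boost::system::error_code& ec, std::size_t /*bytes*/) {
                    if (!ec) schedule_next();   // "on write" handler fired without an error
                });
        }

        void schedule_next() {
            timer_.expires_after(std::chrono::milliseconds(USLEEP));
            timer_.async_wait([this](const boost::system::error_code& ec) {
                if (!ec) write();               // next write only after the previous one completed
            });
        }

        boost::asio::steady_timer timer_;
        boost::asio::ip::tcp::socket socket_;
        std::vector<char> buffer_;
    };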

The problem is: if the Writer (client) writes more data into the socket than the Reader (server) is capable of receiving, this seems to be the behaviour:

  • The Writer starts writing into (I think) a system buffer, and even though the data has not yet been received by the Reader, it keeps calling the "on write" handler without an error.

  • When the buffer is full, boost::asio won't fire the "on write" handler anymore, until the buffer has room again.

  • Meanwhile, the Reader is still receiving small chunks of data.

  • The fact that the Reader keeps receiving bytes after I close the Writer program seems to prove this theory correct.

What I need to achieve is to prevent this buffering, because the data needs to be "real time" (as much as possible).

I'm guessing I need to use some combination of the socket options that asio offers, like no_delay or send_buffer_size, but I'm just guessing here, as I haven't had any success experimenting with these.
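For reference, this is roughly how those options are set on a connected TCP socket in asio (the sizes here are made up, and as noted above they did not solve the problem for me):

    #include <boost/asio.hpp>

    void tune_socket(boost::asio::ip::tcp::socket& sock) {
        // Disable Nagle's algorithm so small writes are sent immediately.
        sock.set_option(boost::asio::ip::tcp::no_delay(true));
        // Shrink the kernel send buffer so less data can queue up locally
        // (the OS may round or clamp the requested size).
        sock.set_option(boost::asio::socket_base::send_buffer_size(8192));
        // The receiving side can shrink its receive buffer the same way.
        sock.set_option(boost::asio::socket_base::receive_buffer_size(8192));
    }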

The first solution one might think of is to use UDP instead of TCP. I will in fact need to switch to UDP for other reasons in the near future, but I would first like to find out how to do this with TCP, just for the sake of having it straight in my head in case I run into a similar problem some other day.

NOTE1: Before I started experimenting with asynchronous operations in the asio library, I had implemented this same scenario using threads, locks and asio::sockets, and did not experience such buffering at that time. I had to switch to the asynchronous API because asio does not seem to allow timed interruption of synchronous calls.

NOTE2: Here is a working example that demonstrates the problem: http://pastie.org/3122025

EDIT: I've done one more test. In NOTE1 I mentioned that when I was using asio::iosockets I did not experience this buffering. I wanted to be sure, so I created this test: http://pastie.org/3125452 It turns out that the buffering is there even with asio::iosockets, so something else must have caused it to go smoothly before, possibly a lower FPS.


Answer 1:


TCP/IP is definitely geared towards maximizing throughput, as the intention of most network applications is to transfer data between hosts. In such scenarios it is expected that a transfer of N bytes will take T seconds, and clearly it doesn't matter if the receiver is a little slow to process the data. In fact, as you noticed, the TCP protocol implements a sliding window which allows the sender to buffer some data so that it is always ready to be sent, but leaves the ultimate throttling control up to the receiver. The receiver can go full speed, pace itself or even pause transmission.

If you don't need throughput and instead want to guarantee that the data your sender is transmitting is as close to real time as possible, then you need to make sure the sender doesn't write the next packet until it receives an acknowledgement from the receiver that the previous data packet has been processed. So instead of blindly sending packet after packet until you are blocked, define a message structure for control messages to be sent from the receiver back to the sender.
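A minimal sketch of that idea (not tied to your code; the class and member names here are hypothetical): the sender issues the next async_write only after it has read a one-byte acknowledgement from the receiver.

    #include <boost/asio.hpp>
    #include <array>
    #include <vector>

    class AckedSender {
    public:
        explicit AckedSender(boost::asio::ip::tcp::socket socket)
            : socket_(std::move(socket)), payload_(4096, 0) {}

        void start() { send_next(); }

    private:
        void send_next() {
            boost::asio::async_write(socket_, boost::asio::buffer(payload_),
                [this](const boost::system::error_code& ec, std::size_t) {
                    if (!ec) wait_for_ack();
                });
        }

        void wait_for_ack() {
            // Block further sends until the receiver confirms it processed the packet.
            boost::asio::async_read(socket_, boost::asio::buffer(ack_),
                [this](const boost::system::error_code& ec, std::size_t) {
                    if (!ec) send_next();   // receiver is ready for more data
                });
        }

        boost::asio::ip::tcp::socket socket_;
        std::vector<char> payload_;
        std::array<char, 1> ack_{};
    };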

Obviously, with this approach the trade-off is that each sent packet is closer to the sender's real time, but you limit how much data you can transfer and slightly increase the total bandwidth used by your protocol (i.e. the additional control messages). Also keep in mind that "close to real time" is relative, because you will still face delays in the network as well as in the receiver's ability to process data. So you should also look at the design constraints of your specific application to determine how "close" you really need to be.

If you need to be very close, but at the same time don't care if packets are lost because old packet data is superseded by new data, then UDP/IP might be a better alternative. However, a) if you have reliable delivery requirements, you might end up reinventing a portion of TCP's wheel, b) keep in mind that certain networks (corporate firewalls) tend to block UDP traffic while allowing TCP traffic, and c) even UDP won't be exactly real time.
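If UDP does turn out to be the right fit, the asio side is a small change. A sketch of a fire-and-forget datagram sender (the endpoint and port here are made up):

    #include <boost/asio.hpp>
    #include <vector>

    void send_datagram(boost::asio::io_context& io, const std::vector<char>& payload) {
        using boost::asio::ip::udp;
        udp::socket socket(io, udp::endpoint(udp::v4(), 0));   // bind to any local port
        udp::endpoint receiver(boost::asio::ip::make_address("127.0.0.1"), 9000);
        // Each send_to is an independent datagram; nothing is retransmitted or reordered.
        socket.send_to(boost::asio::buffer(payload), receiver);
    }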



Source: https://stackoverflow.com/questions/8721002/boostasio-sending-data-faster-than-receiving-over-tcp-or-how-to-disable-buffe
