What happened to the TCP Nagle flush?

Posted by 一个人想着一个人 on 2019-12-10 13:48:27

Question


According to this Socket FAQ article, Nagle's algorithm is one of many algorithms that can cause a bunch of data to sit in the TCP buffer and not hit the wire. The delay from the Nagle algorithm can be up to 200ms.

For some reason, Nagle's algorithm can be turned off completely, but not flushed just once. This is really puzzling to me. Why is there no way to say "just this one time, don't wait for any more data; act as if Nagle's 200ms are up"?

Wouldn't that make perfect sense, and strike a good balance between no Nagle at all, Nagle all the time, and implementing one's own protocol from scratch?


Answer 1:


Good question. I guess nobody ever really needed it, or they worked around it. If I remember correctly, enabling TCP_NODELAY pushes the pending data immediately; you can then disable it again to restore Nagle's algorithm.
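For illustration, here is a minimal sketch of that toggle in C, assuming an already-connected TCP socket (the helper name nagle_flush is mine; on Linux, tcp(7) documents that setting TCP_NODELAY forces a flush of pending output, but other stacks may behave differently):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* "Flush" by toggling Nagle's algorithm off and back on.
 * Enabling TCP_NODELAY pushes any data held back by Nagle;
 * disabling it again restores the normal batching behaviour. */
static int nagle_flush(int sockfd)
{
    int on = 1, off = 0;

    if (setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) < 0)
        return -1;
    return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &off, sizeof(off));
}
```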

Of course, this comes at the high cost of two system calls for a "flush". What you could do instead: send(2) on Unix implementations takes a flags argument, so you could implement your own flag, something like MSG_JUSTPUSHIT (okay, maybe another name), and handle it in tcp_output().




Answer 2:


In performance-sensitive applications where the delays introduced by Nagle's algorithm are an issue, it's often easier to just disable Nagle's algorithm entirely and emulate its batching in software by using scatter/gather IO (e.g., writev()) or by implementing buffering in software where needed. As an added bonus, doing this cuts out some system call overhead.
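As a rough sketch of the scatter/gather approach, assuming TCP_NODELAY has already been set on the socket and that a protocol header and payload should leave in a single write (the function and parameter names are mine):

```c
#include <sys/types.h>
#include <sys/uio.h>

/* Coalesce a header and a payload into one system call with writev(),
 * instead of relying on Nagle's algorithm to merge two small send()s.
 * A production version would loop to handle partial writes. */
static ssize_t send_message(int sockfd, const void *hdr, size_t hdrlen,
                            const void *body, size_t bodylen)
{
    struct iovec iov[2];

    iov[0].iov_base = (void *)hdr;
    iov[0].iov_len  = hdrlen;
    iov[1].iov_base = (void *)body;
    iov[1].iov_len  = bodylen;

    return writev(sockfd, iov, 2);
}
```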

Alternatively, you can open up two separate sockets and disable Nagling on one of them. Just keep in mind that data sent on one socket won't necessarily be synced up with the other one.
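A minimal sketch of that split, assuming two already-connected descriptors to the same peer (the naming is mine):

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* The "urgent" socket gets TCP_NODELAY so small, latency-sensitive
 * writes go out immediately; the other connection keeps Nagle's
 * batching for bulk transfers.  As noted above, TCP gives no ordering
 * guarantee across the two connections. */
static int mark_low_latency(int urgent_fd)
{
    int on = 1;
    return setsockopt(urgent_fd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));
}
```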



Source: https://stackoverflow.com/questions/6726832/what-happened-to-the-tcp-nagle-flush
