How can I do congestion control for a UDP protocol?

深忆病人 2021-02-05 17:29

I have a custom UDP protocol with multiple senders/receivers designed to send large files around as fast as possible. It is client/server based.

How can I detect congestion?

5 Answers
  • 2021-02-05 18:02

    I had the following idea:

    • Sender sends the data.
    • Receiver waits a couple of seconds and then calculates its throughput (bytes per second).
    • Receiver sends the rate at which it is receiving data (bytes per second) back to the sender.
    • Sender calculates its own sending rate.
    • If the sender's rate is significantly higher, it reduces it to match the receiving rate.

    Alternatively, a more advanced approach:

    • Sender starts sending at a predefined minimum rate (e.g. 1 KB/s).
    • Receiver sends the calculated receiving rate back to the sender.
    • If the receiving rate matches the sending rate (taking latency into account), increase the rate by a set factor (e.g. rate * 2).
    • Keep doing this until the sending rate becomes higher than the receiving rate.
    • Keep monitoring both rates to account for changes in available bandwidth, and increase or reduce the rate as needed.

    Disclaimer: I'm not a network expert, so this might not work for you. A rough sketch of this feedback loop follows below.
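
    A minimal sketch of the feedback loop described above, assuming some control channel carries the receiver's measured rate back to the sender once per report (the class name, constants, and thresholds are all made up for illustration):

    ```python
    # Sketch of the ramp-up / back-off logic described above.
    # Assumption: some control channel delivers the receiver's measured
    # rate (bytes/s) to the sender once per feedback interval.

    MIN_RATE = 1024          # start around 1 KB/s, as in the example above
    GROWTH_FACTOR = 2.0      # "increase the rate by a set factor"
    MATCH_TOLERANCE = 0.9    # receiver "keeps up" if it sees >= 90% of our rate

    class RateController:
        def __init__(self):
            self.send_rate = MIN_RATE   # current pacing rate, bytes/s

        def on_receiver_report(self, receiver_rate: float) -> float:
            """Adjust the sending rate based on one receiver feedback report."""
            if receiver_rate >= self.send_rate * MATCH_TOLERANCE:
                # Receiver is keeping up: probe for more bandwidth.
                self.send_rate *= GROWTH_FACTOR
            else:
                # Receiver is falling behind: drop back to what it actually got.
                self.send_rate = max(MIN_RATE, receiver_rate)
            return self.send_rate

    # Tiny demo: pretend the path tops out at ~4 MB/s.
    ctrl = RateController()
    for _ in range(15):
        observed = min(ctrl.send_rate, 4_000_000)
        ctrl.on_receiver_report(observed)
    print(ctrl.send_rate)   # keeps probing above the ~4 MB/s ceiling, then dropping back
    ```

    Doubling on every report is aggressive; a smaller growth factor would probe for bandwidth more gently.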

  • 2021-02-05 18:20

    Flow control is an inherently difficult problem because all you really know is when you sent a packet and when you received a packet. Things like latency, loss, and even speed are all statistics that you have to calculate and interpret.
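
    As an illustration of that bookkeeping, here is a sketch that estimates loss, latency, and goodput purely from send and acknowledgement times; it assumes each datagram carries a sequence number and the receiver acknowledges it (all names here are invented):

    ```python
    import time

    # Illustrative bookkeeping: estimate loss, latency, and goodput from
    # nothing more than send times and acknowledgement times.
    class LinkStats:
        def __init__(self):
            self.sent = {}        # seq -> send timestamp
            self.srtt = None      # smoothed round-trip time, seconds
            self.acked_bytes = 0
            self.lost = 0
            self.start = time.monotonic()

        def on_send(self, seq: int) -> None:
            self.sent[seq] = time.monotonic()

        def on_ack(self, seq: int, nbytes: int) -> None:
            rtt = time.monotonic() - self.sent.pop(seq)
            # Exponentially weighted moving average, similar to TCP's SRTT.
            self.srtt = rtt if self.srtt is None else 0.875 * self.srtt + 0.125 * rtt
            self.acked_bytes += nbytes

        def on_timeout(self, seq: int) -> None:
            self.sent.pop(seq, None)
            self.lost += 1        # interpreted as loss, though it may just be very late

        def goodput(self) -> float:
            """Acknowledged bytes per second since the transfer started."""
            return self.acked_bytes / (time.monotonic() - self.start)
    ```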

    The following article discusses these statistics and their meaning in depth: DEI Tech Note 0021: Loss, Latency, and Speed

    Finding a good solution has been the subject of much research and much commercial endeavor. Different algorithms (TCP, UDT, Multipurpose Transaction Protocol, etc.) use different methods and make different assumptions, all trying to figure out what is going on in the network based on the very sparse data available.

  • 2021-02-05 18:21

    This is assuming you have to use UDP (TCP would be preferred).

    From within the application, the only indication of network congestion is the loss of IP packets. Depending on how your protocol works, you may want to number each outgoing datagram; if the receiver notices that some are missing (or arriving out of order), it can send one or more messages back to the sender indicating that packets were lost and that it should slow down. A sketch of this numbering scheme is below.
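
    A sketch of that numbering scheme, assuming a 32-bit sequence number is prepended to every datagram (the framing and class below are illustrative, not part of any standard):

    ```python
    import struct

    # Illustrative framing: put a 32-bit sequence number in front of every
    # datagram so the receiver can detect gaps and report them to the sender.
    HEADER = struct.Struct("!I")

    def frame(seq: int, payload: bytes) -> bytes:
        return HEADER.pack(seq) + payload

    class GapDetector:
        def __init__(self):
            self.expected = 0     # next sequence number we expect to see

        def on_datagram(self, datagram: bytes) -> list:
            """Return the sequence numbers that appear to be missing so far."""
            (seq,) = HEADER.unpack_from(datagram)
            missing = list(range(self.expected, seq)) if seq > self.expected else []
            self.expected = max(self.expected, seq + 1)
            return missing        # feed these back to the sender as a slow-down hint
    ```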

    There is a protocol called RTP (Real-time Transport Protocol) that is used in real-time streaming applications.

    RTP runs over UDP, and RTCP (Real-time Transport Control Protocol), working alongside RTP, provides QoS (Quality of Service) measurements such as packet loss, delay, and jitter that are reported back to the sender so it knows when to slow down or change codecs.
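
    As an illustration of one of those measurements, here is a sketch of the interarrival-jitter estimator described in RFC 3550, using times in seconds rather than RTP timestamp units (the class and variable names are mine):

    ```python
    # Sketch of an RFC 3550-style interarrival jitter estimate: compare the
    # spacing of arrivals with the spacing of the sender's timestamps and
    # smooth the difference with a 1/16 gain, as the RFC does.
    class JitterEstimator:
        def __init__(self):
            self.jitter = 0.0
            self.prev_transit = None

        def on_packet(self, send_ts: float, arrival_ts: float) -> float:
            transit = arrival_ts - send_ts
            if self.prev_transit is not None:
                d = abs(transit - self.prev_transit)
                self.jitter += (d - self.jitter) / 16.0
            self.prev_transit = transit
            return self.jitter
    ```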

    I'm not saying you can use RTP directly, but it may be helpful to look at how it works.

  • 2021-02-05 18:24

    Latency is a good way to detect congestion. If your latency starts going up, then you should probably slow down. A lost packet is equivalent to latency = infinity. But you can never be sure whether a packet was lost or is just very slow, so you should have a timeout to "detect" lost packets.
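
    A sketch of that idea, tracking the lowest RTT seen as a baseline and treating a sustained rise or a timeout as congestion (the threshold and timeout values are arbitrary examples):

    ```python
    # Delay-based congestion detection: keep the lowest RTT seen as a baseline
    # and treat a big rise, or a timeout, as a congestion signal.
    # RTT_TIMEOUT and RISE_FACTOR are arbitrary illustrative values.
    class DelayDetector:
        RTT_TIMEOUT = 2.0     # seconds: beyond this, assume the packet was lost
        RISE_FACTOR = 1.5     # an RTT 50% above the baseline counts as congestion

        def __init__(self):
            self.base_rtt = None

        def on_rtt_sample(self, rtt: float) -> bool:
            """Return True if this sample suggests congestion."""
            if rtt >= self.RTT_TIMEOUT:
                return True                  # "lost" packet: latency ~ infinity
            if self.base_rtt is None or rtt < self.base_rtt:
                self.base_rtt = rtt          # remember the lowest RTT observed
            return rtt > self.base_rtt * self.RISE_FACTOR
    ```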

  • 2021-02-05 18:29

    It seems the AIMD algorithm is what the TCP and UDT protocols use to avoid congestion.

    From the Wikipedia page:

    The additive-increase/multiplicative-decrease (AIMD) algorithm is a feedback control algorithm best known for its use in TCP congestion avoidance. AIMD combines linear growth of the congestion window with an exponential reduction when congestion takes place. Multiple flows using AIMD congestion control will eventually converge to use equal amounts of a contended link.
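
    A minimal sketch of AIMD applied to a congestion window, with TCP-style constants (add one packet per congestion-free round trip, halve on loss); this is illustrative, not the actual TCP or UDT implementation:

    ```python
    # Minimal AIMD: grow the congestion window by a fixed amount per
    # congestion-free round trip, halve it when loss is detected.
    class AIMD:
        ADDITIVE_INCREASE = 1.0        # packets added per congestion-free RTT
        MULTIPLICATIVE_DECREASE = 0.5  # factor applied when loss is seen

        def __init__(self):
            self.cwnd = 1.0            # congestion window, in packets

        def on_round_trip(self, loss_detected: bool) -> float:
            if loss_detected:
                self.cwnd = max(1.0, self.cwnd * self.MULTIPLICATIVE_DECREASE)
            else:
                self.cwnd += self.ADDITIVE_INCREASE
            return self.cwnd

    # Tiny demo of the sawtooth: linear growth, one loss, then growth again.
    cc = AIMD()
    for rtt in range(10):
        cc.on_round_trip(loss_detected=(rtt == 6))
    print(cc.cwnd)   # 6.5: grew to 7, was halved to 3.5 at the loss, then grew again
    ```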
