I have a custom UDP protocol with multiple senders/receivers designed to send large files around as fast as possible. It is client/server based.
How can I detect congestion?
Disclaimer: I'm not a network expert, so this might not work for you.
Flow control is an inherently difficult problem because all you really know is when you sent a packet and when you received a packet. Things like latency, loss, and even speed are all statistics that you have to calculate and interpret.
The following article discusses these statistics and their meaning in depth: DEI Tech Note 0021: Loss, Latency, and Speed
Finding a good solution has been the subject of much research and much commercial endeavor. Different algorithms (TCP, UDT, Multipurpose Transaction Protocol, etc.) use different methods and make different assumptions, all trying to figure out what is going on in the network based on the very sparse data available.
This is assuming you have to use UDP (TCP would be preferred).
From within the application, the only indication of network congestion is the loss of IP packets. Depending on how your protocol is designed, you may want to number each datagram going out; if the receiver sees that some are missing (or arriving out of order), it can send one or more messages back to the sender indicating that packets were lost and that it should slow down.
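As a rough illustration of the numbering idea, here is a minimal receiver-side sketch (not from the original answer; the 4-byte big-endian sequence header is an assumption about your datagram format):

```python
# Hypothetical sketch: detect gaps in datagram sequence numbers on the receiver.
# Assumes each datagram starts with a 32-bit big-endian sequence number.
import struct

class LossDetector:
    def __init__(self):
        self.expected = 0     # next sequence number we expect
        self.missing = set()  # sequence numbers currently seen as gaps

    def on_datagram(self, payload: bytes):
        """Returns a list of sequence numbers to report (NACK) to the sender."""
        (seq,) = struct.unpack_from("!I", payload, 0)
        nacks = []
        if seq == self.expected:
            self.expected += 1
        elif seq > self.expected:
            # Gap: everything between expected and seq is (so far) missing.
            nacks = list(range(self.expected, seq))
            self.missing.update(nacks)
            self.expected = seq + 1
        else:
            # A late or reordered packet filled a previously reported gap.
            self.missing.discard(seq)
        return nacks
```

The sender would treat incoming NACKs (or their rate) as the congestion signal and reduce its sending rate accordingly.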
There is a protocol called RTP (Real-time Transport Protocol) that is used in real time streaming applications.
RTP runs over UDP, and RTCP (Real-time Transport Control Protocol), working alongside RTP, provides QoS (Quality of Service) measures such as packet loss, delay, and jitter, reported back to the sender so it knows when to slow down or change codecs.
Not saying you can use RTP directly, but it may be helpful to look at how it works.
Latency is a good way to detect congestion. If your latency starts going up, then you should probably slow down. A lost packet is equivalent to latency = infinity. But you can never be sure whether a packet was lost or is just very slow, so you should have a timeout to "detect" lost packets.
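One way to turn this into code is to smooth your RTT samples and derive a loss-detection timeout from them, in the spirit of TCP's SRTT/RTTVAR estimator (RFC 6298). This is a sketch, not part of the original answer; the smoothing constants are the usual TCP defaults and you would tune them for your protocol:

```python
# Hypothetical sketch: smooth round-trip-time samples and derive a timeout.
# A packet unacknowledged past timeout() is treated as lost (latency = infinity).
class RttEstimator:
    ALPHA, BETA = 1 / 8, 1 / 4  # standard TCP-style smoothing gains

    def __init__(self):
        self.srtt = None    # smoothed RTT (seconds)
        self.rttvar = None  # RTT variation estimate

    def sample(self, rtt: float):
        if self.srtt is None:
            # First measurement initializes both estimates.
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    def timeout(self) -> float:
        """Retransmission/loss timeout: smoothed RTT plus a variance margin."""
        return self.srtt + 4 * self.rttvar
```

A rising `srtt` is your "latency going up" congestion signal; an expired `timeout()` is your stand-in for a lost packet.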
It seems the AIMD algorithm is what protocols like TCP and UDT use to avoid congestion.
From the Wikipedia page:
The additive-increase/multiplicative-decrease (AIMD) algorithm is a feedback control algorithm best known for its use in TCP congestion avoidance. AIMD combines linear growth of the congestion window with an exponential reduction when congestion takes place. Multiple flows using AIMD congestion control will eventually converge to use equal amounts of a contended link.
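The rule described above is tiny to implement. A minimal sketch (my own illustration, with the window measured in packets per round trip; the class and method names are made up):

```python
# Hypothetical AIMD sketch: grow the congestion window by one packet per
# loss-free round trip, halve it whenever loss (congestion) is detected.
class AimdWindow:
    def __init__(self, initial=1.0, min_window=1.0):
        self.window = initial
        self.min_window = min_window

    def on_rtt_without_loss(self):
        self.window += 1.0  # additive increase

    def on_loss(self):
        # Multiplicative decrease, never dropping below the floor.
        self.window = max(self.min_window, self.window / 2)
```

The sender would cap its in-flight (unacknowledged) datagrams at `window`, which is what makes repeated halving translate into an exponential backoff under sustained loss.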