Why are particular UDP messages always getting dropped below a particular buffer size?

悲&欢浪女 2021-01-04 04:10

3 different messages are being sent to the same port at different rates:

Message  size (bytes)

2 Answers
  • 2021-01-04 04:37

    Without a detailed analysis of every network stack implementation along the path your UDP messages travel, it is nearly impossible to predict the resulting behaviour.

    UDP implementations are allowed to drop any packet at their own discretion. Usually this happens when a stack concludes that it must drop already-queued packets in order to receive new ones. There is no formal requirement that the dropped packets be the oldest or the newest received; it can also happen that one size class is affected more than others because of internal memory-management strategies.

    Of the IP stacks involved, the most interesting one is the stack on the receiving machine.

    You will certainly see better receive behaviour if you give the receive side a buffer large enough to hold several seconds' worth of the expected messages. I'd start with at least 10k; a sketch of how to request this is shown at the end of this answer.

    The observed "change" in behaviour when going from 4,799 to 4,800 may simply result from the latter allowing one of the small messages to be received before it has to be dropped again, while the smaller size causes it to be dropped slightly earlier. If the receiving application reads the pending message quickly enough, you will receive small messages in the one case and no small messages in the other.

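    As a minimal sketch of the suggestion above (written against POSIX-style BSD sockets; the 200000-byte request is an illustrative assumption, not a value from the question, and the same setsockopt() interface is what the VxWorks sockLib man page linked in the other answer documents), enlarging the receive buffer and checking what the stack actually granted could look like this:

        #include <stdio.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <unistd.h>

        int main(void)
        {
            int sock = socket(AF_INET, SOCK_DGRAM, 0);
            if (sock < 0) {
                perror("socket");
                return 1;
            }

            /* Request a receive buffer big enough to hold several seconds'
               worth of the expected message mix (200000 is an illustrative
               value, not taken from the question). */
            int rcvbuf = 200000;
            if (setsockopt(sock, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
                perror("setsockopt(SO_RCVBUF)");

            /* Read back what the stack actually granted; a stack is free to
               clamp, round, or scale the requested value. */
            int actual = 0;
            socklen_t len = sizeof(actual);
            if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &actual, &len) == 0)
                printf("effective SO_RCVBUF: %d bytes\n", actual);

            close(sock);
            return 0;
        }

    Reading the value back matters because the stack does not have to honour the requested size exactly, so the effective buffer may differ from what you asked for.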
  • 2021-01-04 04:43

    The basic answer to why an SO_RCVBUF size of 4799 loses the low-speed messages while a size of 4800 works fine is throughput: given the mixture of UDP packets coming in, the rate at which they arrive, the rate at which you process incoming packets, and the mbuf and cluster counts configured in your VxWorks kernel, the larger size provides just enough network-stack headroom that the low-speed messages are no longer discarded.

    The SO_SNDBUF option description in the setsockopt() man page at http://www.vxdev.com/docs/vx55man/vxworks/ref/sockLib.html#setsockopt (mentioned in a comment above) says this about the specified size and its effect on mbuf usage:

    The effect of setting the maximum size of buffers (for both SO_SNDBUF and SO_RCVBUF, described below) is not actually to allocate the mbufs from the mbuf pool. Instead, the effect is to set the high-water mark in the protocol data structure, which is used later to limit the amount of mbuf allocation.

    UDP packets are discrete units. If you send 10 packets of 232 bytes each, that is not treated as 2320 bytes of data in contiguous memory; it is 10 separate memory buffers within the network stack, because UDP delivers discrete datagrams while TCP delivers a continuous stream of bytes. The sketch at the end of this answer illustrates the difference.

    See How do I tune the network buffering in VxWorks 5.4? on the DDS community web site, which discusses the interdependence between the mixture of mbuf sizes and network clusters.

    See How do I resolve a problem with VxWorks buffers? on the DDS community web site.

    See the 2004 slide presentation (PDF) A New Tool to study Network Stack Exhaustion in VxWorks, which discusses using tools such as mBufShow and inetStatShow to see what is happening in the network stack.

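    To make the discrete-datagram point above concrete, here is an illustrative sketch (POSIX-style sockets on the loopback interface; port 9000 and the exchange of exactly ten messages are assumptions made for the example, not details from the question). Ten 232-byte sends arrive as ten separate datagrams, each consuming its own buffer inside the stack, never as one contiguous 2320-byte block:

        #include <stdio.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <unistd.h>

        int main(void)
        {
            int rx = socket(AF_INET, SOCK_DGRAM, 0);
            int tx = socket(AF_INET, SOCK_DGRAM, 0);

            struct sockaddr_in addr;
            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(9000);                  /* illustrative port */
            addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
            bind(rx, (struct sockaddr *)&addr, sizeof(addr));

            /* Send ten discrete 232-byte datagrams; this small burst fits
               comfortably in the default receive buffer, so none are dropped. */
            char msg[232];
            memset(msg, 'x', sizeof(msg));
            for (int i = 0; i < 10; i++)
                sendto(tx, msg, sizeof(msg), 0,
                       (struct sockaddr *)&addr, sizeof(addr));

            /* Each recvfrom() returns exactly one datagram (232 bytes); the
               stack never coalesces them into a single 2320-byte block the
               way a TCP byte stream could be read. */
            char buf[4096];
            for (int i = 0; i < 10; i++) {
                ssize_t n = recvfrom(rx, buf, sizeof(buf), 0, NULL, NULL);
                printf("datagram %d: %zd bytes\n", i, n);
            }

            close(tx);
            close(rx);
            return 0;
        }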