DatagramSocket temporarily stops receiving packets (Java)


Question


I have programmed a plugin in Lua for a game that sends player information via a UDP packet (512 bytes) to a remote server. The server reads the data from each packet and aggregates all player information into an XML file, which can then be viewed on the web by all players so they can see each other's current state.

I have programmed the server in Java using a DatagramSocket to handle the incoming packets, but I noticed some strange behavior. After a certain period of time, the DatagramSocket appears to temporarily stop receiving packets for about 10-12 seconds, then resumes normal behavior (no exceptions are thrown that I can see). There is definitely a relationship between how often packets are sent by the clients and how quickly this behavior occurs: if I increase the update frequency of the clients, the DatagramSocket "fails" sooner.

It may be worth mentioning that each received packet spawns a thread which handles the data in that packet. I am running the server on Linux, if it makes a difference!

Does anyone know what could be causing this sort of behavior to occur?

Andrew


Answer 1:


UDP is a network protocol with absolutely no delivery guarantee. Any network component anywhere along the way (including the client and server PC itself) can decide to drop packets for any reason, such as high load or network congestion.

This means you'll have to spelunk a bit to find out where the packet loss is happening. You can use something like Wireshark to see whether the packets are arriving at the server at all.

If reliable delivery is more important than lower latency, switch to TCP. If you stick to UDP you'll have to allow for packets getting lost, regardless of whether you fix this particular issue at this particular time.




Answer 2:


My conjecture would be that you're running out of receive buffer space on the server end.

You might want to revisit your design: spawning a thread is a pretty expensive operation. Doing so for every incoming packet would lead to a system with relatively low throughput, which could easily explain why the receive queue is building up.
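One common alternative (not from the original answer, just a hedged sketch) is to keep a single receive loop and hand each datagram's payload off to a small, fixed-size thread pool, so the cost of thread creation is paid once rather than per packet. The port number, pool size, and handle method below are placeholders for illustration:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledUdpServer {
    public static void main(String[] args) throws Exception {
        // Pool size and port are illustrative values, not from the answer.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        DatagramSocket socket = new DatagramSocket(9876);
        byte[] buffer = new byte[512];

        while (true) {
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);   // blocks until a datagram arrives

            // Copy the payload before handing it off, since the buffer is reused.
            byte[] payload = Arrays.copyOf(packet.getData(), packet.getLength());
            pool.submit(() -> handle(payload));
        }
    }

    // Placeholder for whatever per-packet processing the server actually does.
    private static void handle(byte[] payload) {
        // e.g. parse player info and update the aggregated XML
    }
}
```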

Also, see Specifying UDP receive buffer size at runtime in Linux
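As a rough illustration of what that linked question covers, you can request a larger socket receive buffer with setReceiveBufferSize and then read back what the OS actually granted (on Linux the grant is capped by the kernel's net.core.rmem_max setting). The port and size below are arbitrary example values:

```java
import java.net.DatagramSocket;
import java.net.SocketException;

public class ReceiveBufferCheck {
    public static void main(String[] args) throws SocketException {
        // Example port only; use whatever port the server actually listens on.
        DatagramSocket socket = new DatagramSocket(9876);

        // Request a larger receive buffer; the kernel may clamp the request
        // to its configured maximum (net.core.rmem_max on Linux).
        socket.setReceiveBufferSize(4 * 1024 * 1024);

        // Read back the effective size, since the request is only a hint.
        System.out.println("Effective SO_RCVBUF: "
                + socket.getReceiveBufferSize() + " bytes");

        socket.close();
    }
}
```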

P.S. I am sure you already know that UDP does not guarantee message delivery, so I won't labour the point.




Answer 3:


Starting a thread for each UDP packet is a Bad Idea™. UDP servers are traditionally coded as simple receive loops (after all, you only need one socket). This way you avoid all the overhead of threads, synchronization, and whatnot.
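A minimal sketch of such a receive loop might look like the following; the port number and the handle method are placeholders rather than anything from the original post:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpReceiveLoop {
    public static void main(String[] args) throws Exception {
        DatagramSocket socket = new DatagramSocket(9876);

        // 512 bytes matches the packet size mentioned in the question.
        byte[] buffer = new byte[512];
        DatagramPacket packet = new DatagramPacket(buffer, buffer.length);

        while (true) {
            socket.receive(packet);            // blocks until a datagram arrives
            handle(packet.getData(), packet.getLength());
            packet.setLength(buffer.length);   // reset before reusing the packet
        }
    }

    // Placeholder for the per-packet processing.
    private static void handle(byte[] data, int length) {
        // e.g. parse player info and update the aggregated XML
    }
}
```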



Source: https://stackoverflow.com/questions/8314174/datagramsocket-temporarily-stops-receiving-packets-java
