Question
Disinterested curiosity...
In Java I listen on a socket, with backlog of 1:
ServerSocket ss = new ServerSocket(4000, 1);
In separate shells I run
netcat localhost 4000
repeatedly - five times so far.
The connections are never rejected. Every instance of netcat
sits and waits until my ServerSocket is destroyed.
A backlog of 1 should mean that only one incoming connection can queue up, and further attempts get rejected - shouldn't it? (I don't know whether the queue includes the first connection; that's not important right now.)
I know I can make this work by closing the ServerSocket (and then opening another one when I'm ready), but... shouldn't it work anyway?
Have I misunderstood?
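For reference, here is a minimal self-contained sketch of the server side of the experiment (port 4000 and the backlog of 1 come from the question; the class name and the one-minute sleep are my own choices to keep the socket open while netcat clients are started):

import java.net.ServerSocket;

public class BacklogServer {
    public static void main(String[] args) throws Exception {
        // Request a backlog of 1 and never call accept(), so every incoming
        // connection stays queued in the kernel rather than being handled.
        try (ServerSocket ss = new ServerSocket(4000, 1)) {
            System.out.println("Listening on 4000; run `netcat localhost 4000` a few times.");
            Thread.sleep(60_000); // keep the ServerSocket open for one minute
        }
    }
}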
Answer 1:
As I wrote here:
This behaviour is platform-dependent. Windows issues an RST when the backlog fills up, which results in 'connection refused'. Unix and Linux just drop the SYN packet, so the client's connect attempt blocks and keeps retransmitting until it times out - which would explain why each netcat simply sits and waits.
NB: The backlog length isn't really 1: the platform is free to adjust the requested value up or down. The smallest minimum backlog in history was five, in early BSD releases; on some platforms it is now fifty or even five hundred.
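One way to see which behaviour a given platform exhibits is to connect with an explicit timeout and inspect the failure mode: a dropped SYN shows up as a SocketTimeoutException, while an RST fails fast with a ConnectException ('Connection refused'). This is a sketch of my own, not part of the original answer; it assumes the server from the question is already running with its queue full, and the two-second timeout is arbitrary:

import java.net.ConnectException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class BacklogProbe {
    public static void main(String[] args) {
        try (Socket s = new Socket()) {
            // Bounded connect: long enough to distinguish a dropped SYN
            // (blocks, then times out) from an RST (fails immediately).
            s.connect(new InetSocketAddress("localhost", 4000), 2000);
            System.out.println("connected - still room in the accept queue");
        } catch (SocketTimeoutException e) {
            System.out.println("timed out - SYN likely dropped (Unix/Linux behaviour)");
        } catch (ConnectException e) {
            System.out.println("refused - RST received (Windows behaviour)");
        } catch (Exception e) {
            System.out.println("other failure: " + e);
        }
    }
}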
Source: https://stackoverflow.com/questions/33189782/why-arent-serversocket-connections-rejected-when-backlog-is-full