I ran into a problem with the select() function while working on a Linux socket program. select() worked fine, as the man page describes, when a client connected to the server side
You have the right answer already: re-initialize the fd_sets before each call to select(2).
I would also like to point you to a better alternative: Linux provides the epoll(7) facility. While it is not standard, it is much more convenient, since you set up the events you wait for only once. The kernel manages the file descriptor event tables for you, so it is also more efficient. epoll additionally provides edge-triggered operation, where only a change in state on a descriptor is signaled.
For completeness: the BSDs provide kqueue(2), and Solaris has /dev/poll.
One more thing: your code has a well-known race condition between the client and the server. Take a look at Stevens, UNP: nonblocking accept.
I ran into the same trouble in similar code of mine. I followed the suggestion of re-initializing before each call to select(), and it works. In this case, just moving these two lines inside the loop makes it work:
FD_ZERO(&read_set);
FD_SET(servSock, &read_set);
The same effect occurs if you don't reset the timeval struct before each call to select(), since select() modifies it as well.
The select() function is frustrating to use: you have to set up its arguments each time before you call it, because it modifies them. What you are seeing is what happens when you don't rebuild the fd_set(s) each time around the loop.
You have to refill your fd_set on each iteration. The best way to do so is to maintain a long-lived collection of your FDs somewhere and copy the ones you need for the select() call into a temporary fd_set.
If you need to handle a lot of clients, you might have to change the FD_SETSIZE macro (defined in /usr/include/sys/select.h).
Happy network programming :)