I will go on record as saying that for almost anything except toy programs, you should use non-blocking sockets as a matter of course.
Blocking sockets cause a serious problem: if the machine on the other end (or any part of your connection to it) fails during a blocking call, your code will end up blocked until the IP stack's timeout. In a typical case, that's around 2 minutes, which is completely unacceptable for most purposes. The only way¹ to abort that blocking call is to terminate the thread that made it -- but terminating a thread is itself almost always unacceptable, as it's essentially impossible to clean up after it and reclaim whatever resources it had allocated. Non-blocking sockets make it trivial to abort a call when/if needed, without doing anything to the thread that made the call.
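To make the abort-on-demand point concrete, here is a minimal Python sketch (the function name `recv_with_timeout` and the timeout values are mine, not from any particular library). It puts the socket in non-blocking mode and uses `select()` to wait only as long as *we* decide, rather than the stack's multi-minute default -- and when the peer is unresponsive, it simply raises instead of leaving the thread stuck:

```python
import select
import socket

def recv_with_timeout(sock, timeout=5.0, bufsize=4096):
    """Receive from a non-blocking socket, giving up after `timeout` seconds.

    The calling thread is never stuck in the kernel: select() returns
    control to us on timeout, so aborting costs nothing.
    """
    sock.setblocking(False)
    readable, _, _ = select.select([sock], [], [], timeout)
    if not readable:
        # Peer (or the path to it) is unresponsive; abort on OUR schedule,
        # without terminating any thread.
        raise TimeoutError("no data within %.1f s" % timeout)
    return sock.recv(bufsize)
```

A `socket.socketpair()` is an easy way to try it: send on one end and the other end's `recv_with_timeout` returns immediately; send nothing and it raises `TimeoutError` after the deadline you chose.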
It is possible to make blocking sockets work sort of well if you use a multi-process model instead. Here, you simply spawn an entirely new process for each connection. That process uses a blocking socket, and when/if something goes wrong, you just kill the entire process. The OS knows how to clean up the resources from a process, so cleanup isn't a problem. It still has other potential problems though: 1) you pretty much need a process monitor to kill processes when needed, and 2) spawning a process is usually quite a bit more expensive than just creating a socket. Nonetheless, this can be a viable option, especially if:
- You're dealing with a small number of connections at a time
- You generally do extensive processing for each connection
- You're dealing only with local hosts so your connection to them is fast and dependable
- You're more concerned with optimizing development than execution
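The multi-process model above can be sketched in a few lines of Python (POSIX-only, since it relies on `os.fork`; the names `spawn_worker` and `reap_or_kill` are mine, and the "process monitor" here is just a polling `waitpid` loop rather than anything production-grade):

```python
import os
import signal
import socket
import time

def spawn_worker(conn, work):
    """Fork a child to service `conn` with ordinary blocking calls.

    Returns the child's pid; the parent drops its copy of the socket.
    """
    pid = os.fork()
    if pid == 0:                      # child: blocking I/O is fine here
        try:
            work(conn)
        finally:
            os._exit(0)
    conn.close()                      # parent keeps only the pid
    return pid

def reap_or_kill(pid, timeout):
    """Minimal process monitor: give the child `timeout` seconds, then kill it.

    Killing the process is safe cleanup -- the OS reclaims its socket and
    every other resource it held. Returns True if the child finished on
    its own, False if it had to be killed.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        done, _status = os.waitpid(pid, os.WNOHANG)
        if done:
            return True               # child exited normally
        time.sleep(0.05)
    os.kill(pid, signal.SIGKILL)      # stuck in a blocking call: kill it
    os.waitpid(pid, 0)                # reap the zombie
    return False
```

Note how this matches the trade-off in the list: each connection costs a `fork()`, which is far more expensive than creating a socket, but when a blocking call wedges, `SIGKILL` plus the OS's normal process teardown does all the cleanup the thread-termination approach can't.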
1. Well, technically not the only possible way, but most of the alternatives are relatively ugly--to be more specific, I think by the time you add code to figure out that there's a problem, and then fix the problem, you've probably done more extra work than if you'd just used a non-blocking socket in the first place.