When we perform a fork() in Unix, open file descriptors are inherited by the child, and if we don't need to use them we should close them. However, when we use libraries, file descriptors may be open that we don't know about.
As mentioned in @Louis Gerbarg's answer, the libraries are probably expecting the file handles to be kept open across fork() (which is supposed to be, after all, an almost identical copy of the parent process). The problem most people have is with the exec() which often follows the fork(). Here, the correct solution is for the library which created the handles to mark them as close-on-exec (FD_CLOEXEC).
In libraries used by multithreaded programs, there is a race condition between a library creating a file handle and setting FD_CLOEXEC on it (another thread can fork() between the two operations). To fix that problem, O_CLOEXEC was introduced in the Linux kernel.
To start with, you don't really need to care a whole lot about the open file descriptors you don't know about. If you know you're not going to write to them again, closing them is a good idea and doesn't hurt - you just did a fork() after all, so the fds are open twice. But likewise, if you leave them open, they won't bother you either - after all, you don't know about them, so you presumably won't be randomly writing to them.
As for what your third-party libraries will do, it's a bit of a toss-up either way. Some probably don't expect to run into a situation with a fork(), and might end up accidentally writing to the same fd from two processes without any synchronization. Others probably don't expect to have you closing their fds on them. You'll have to check. This is why it's a bad idea to randomly open a file descriptor in a library and not give it to the caller to manage.
All that said, in the spirit of answering the original question, there isn't a particularly good way. You can call dup() or dup2() on a file descriptor; if it's closed, the call will fail with EBADF. So you can say:
int newfd = dup(oldfd);
if (newfd >= 0)  /* dup() returns -1 with errno == EBADF if oldfd is closed */
{
    close(newfd);  /* we only wanted the probe, not the duplicate */
    close(oldfd);
}
but at that point you're just as well off calling close(oldfd) in the first place and ignoring any EBADF errors.
Assuming you still want to take the nuclear option of closing everything, you then need to find the maximum number of open file descriptors possible. Assuming 1 to 65,535 is not a good idea. First of all, fds start at 0, of course, but also there's no particular upper limit defined. To be portable, POSIX's sysconf(_SC_OPEN_MAX)
should tell you, on any sane POSIX system, though strictly speaking it's optional. If you're feeling paranoid, check the return value for -1, though at that point you mostly have to fall back on a hardcoded value anyway (1024 should be fine unless you're doing something extremely weird). Or if you're fine with being Linux-specific, you can dig around in /proc.
Don't forget not to close fds 0, 1, and 2 - that can really confuse things.
Reasonable libraries will always have functions which free whatever resources (e.g. file handles) they have allocated.