When we perform a fork in Unix, open file handles are inherited, and if we don't need to use them we should close them. However, when we use libraries, file handles may be open that we don't know about. How can we find out which handles are open in our process so we can close them after the fork?
Isn't this a design issue? Is it possible for your process to fork before initializing the libraries that open those files?
I agree with what other people have said about closing random files being dangerous. You might end up filing some pretty interesting bug reports for all of your third-party tools.
That said, if you know you won't need those files to be open, you can always walk through the whole range of possible file descriptors (from 0, or 3 if you want to keep the standard streams, up to the limit reported by sysconf(_SC_OPEN_MAX)) and close everything you don't recognize; see the sketch below.
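As a rough illustration of that brute-force approach (not something to drop into production unreviewed), a sketch in C might look like the following; the child-program name is a placeholder, and the fallback limit of 1024 is just an assumption for systems where sysconf() reports no limit:

/* Brute-force sketch: after fork(), the child closes every descriptor
 * above stderr that it does not explicitly need before exec'ing. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {
        /* Child: keep stdin/stdout/stderr (0, 1, 2), close the rest. */
        long max_fd = sysconf(_SC_OPEN_MAX);
        if (max_fd < 0)
            max_fd = 1024; /* assumed fallback when no limit is reported */
        for (long fd = 3; fd < max_fd; fd++)
            close(fd); /* harmlessly returns EBADF for fds that were never open */

        execlp("some-child-program", "some-child-program", (char *)NULL);
        perror("execlp");
        _exit(127);
    }

    /* Parent continues with all of its descriptors intact. */
    return EXIT_SUCCESS;
}

On Linux you can avoid guessing at the upper bound by reading /proc/self/fd instead, as mentioned in the other answers.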
Just a link, but it seems helpful: How many open files? at netadmintools.com. It seems to use /proc investigation to learn about a process's open files; I'm not sure if that is the only way or if there is an API. Parsing files for this kind of information can be a bit messy. Also, /proc might be deprecated, something to check for.
If the libraries are opening files you don't know about, how do you know they don't need them after a fork? Unexported handles are an internal library detail; if the library wants them closed, it will register a fork handler (for example via pthread_atfork()) to close them. Going behind a library's back and closing its file handles will lead to subtle, hard-to-debug problems, since the library will fail unexpectedly when it attempts to work with a handle it knows it opened correctly and never closed.
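For illustration, here is a minimal sketch of what such a handler could look like inside a library; library_init, library_fd, and close_in_child are hypothetical names, not part of any real API:

/* Sketch of a library taking care of its own descriptor across fork(). */
#include <fcntl.h>
#include <pthread.h>
#include <unistd.h>

static int library_fd = -1;

/* Runs in the child immediately after fork(): the library itself decides
 * that the inherited descriptor is not wanted there and closes it. */
static void close_in_child(void)
{
    if (library_fd >= 0) {
        close(library_fd);
        library_fd = -1;
    }
}

int library_init(const char *path)
{
    library_fd = open(path, O_RDONLY);
    if (library_fd < 0)
        return -1;

    /* No prepare/parent handlers needed; only act in the child. */
    return pthread_atfork(NULL, NULL, close_in_child);
}

The point is that only the library knows whether its descriptor is still needed after a fork, so the cleanup belongs inside the library, not in your code.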
From a shell you can run:

lsof -P -n -p _PID_

where _PID_ is your process's PID.
On Linux you can check the /proc/<pid>/fd directory: it contains one entry per open file descriptor, named after the descriptor number. I'm almost sure this approach is not portable.
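As a small, Linux-only sketch, a process can list its own descriptors by reading that directory; note that the directory stream opened by opendir() will show up in the listing too:

/* List the process's own open descriptors via /proc/self/fd (Linux-specific). */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    DIR *dir = opendir("/proc/self/fd");
    if (dir == NULL) {
        perror("opendir /proc/self/fd");
        return EXIT_FAILURE;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue; /* skip "." and ".." */
        /* Each remaining entry's name is an open descriptor number. */
        printf("open fd: %s\n", entry->d_name);
    }

    closedir(dir);
    return EXIT_SUCCESS;
}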
Alternatively, you can use lsof, which is available for Linux, AIX, FreeBSD, and NetBSD, according to man lsof.