Question
If, for some reason, I discover a fatal situation in my program, I would like to exit with an error code. Sometimes the fatal error occurs in a context that has no access to the other open file descriptors. Is it good practice to close those file descriptors anyway? As far as I know, these files are automatically closed when the process dies.
Answer 1:
Files are automatically closed, but it's good practice to close them explicitly.
See what valgrind reports for this example:
david@debian:~$ cat demo.c
#include <stdio.h>
int main(void)
{
    FILE *f;

    f = fopen("demo.c", "r");
    return 0;
}
david@debian:~$ valgrind ./demo
==3959== Memcheck, a memory error detector
==3959== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==3959== Using Valgrind-3.6.0.SVN-Debian and LibVEX; rerun with -h for copyright info
==3959== Command: ./demo
==3959==
==3959==
==3959== HEAP SUMMARY:
==3959== in use at exit: 568 bytes in 1 blocks
==3959== total heap usage: 1 allocs, 0 frees, 568 bytes allocated
==3959==
==3959== LEAK SUMMARY:
==3959== definitely lost: 0 bytes in 0 blocks
==3959== indirectly lost: 0 bytes in 0 blocks
==3959== possibly lost: 0 bytes in 0 blocks
==3959== still reachable: 568 bytes in 1 blocks
==3959== suppressed: 0 bytes in 0 blocks
==3959== Rerun with --leak-check=full to see details of leaked memory
==3959==
==3959== For counts of detected and suppressed errors, rerun with: -v
==3959== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 4 from 4)
As you can see, valgrind flags the unclosed FILE as memory that is still reachable at exit.
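For comparison, a sketch of the same demo with the stream closed before returning; with this one-line change the still-reachable block should no longer appear in the report:

#include <stdio.h>

int main(void)
{
    FILE *f;

    f = fopen("demo.c", "r");
    if (f)
        fclose(f);   /* explicit close: the FILE is freed before exit */
    return 0;
}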
In some circumstances you can make use of atexit():
#include <stdio.h>
#include <stdlib.h>

static FILE *f;

/* Runs on normal termination (return from main() or exit()), but not on
 * _exit() or when the process is killed by a signal. */
static void free_all(void)
{
    if (f)               /* fopen() may have failed */
        fclose(f);
}

static int check(void)
{
    return 0;            /* pretend a fatal condition was detected */
}

int main(void)
{
    atexit(free_all);
    f = fopen("demo.c", "r");
    if (!check()) exit(EXIT_FAILURE);
    /* more code */
    return 0;
}
Answer 2:
The classic guide to POSIX programming, "Advanced Programming in the UNIX Environment", states:
When a process terminates, all of its open files are closed automatically by the kernel. Many programs take advantage of this fact and don't explicitly close open files.
You did not mention the OS in your question, but such behavior should be expected from any OS. Whenever your program's control flow crosses exit() or a return from main(), it is the system's responsibility to clean up after the process.
There is always a danger of bugs in the OS implementation. But, on the other hand, at process termination the system has to deallocate far more than a few open file descriptors: the memory occupied by the executable image, the stack, and the kernel objects associated with the process. You cannot control this behavior from user space; you just rely on it working as intended. So why can't a programmer rely on the automatic closing of the fds?
So the only problem with leaving fds open may be a question of programming style. And, as in the case of using stdio objects (i.e. the machinery built around the system-provided file I/O), you may get somewhat disorienting reports while valgrinding. As for the danger of leaking system resources, there should be nothing to worry about, unless your OS implementation is really buggy.
Answer 3:
As far as I know, these files are automatically closed when the process dies.
Don't rely on that. Conceptually, it is your responsibility to free allocated memory, close non-standard file descriptors, and so on before the process dies. Of course, every sane OS (and even Windows) will clean up after your process, but that's not something you should count on.
Answer 4:
Yes. Suppose your main program later becomes a class in a larger program; then what you have just described is a resource leak. You are essentially violating encapsulation by relying on global program state, i.e. on the state of the process - not your module, not a class, not an ADT, not a thread, but the whole process - being in a shutdown state.
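For illustration, a minimal sketch of that idea in C (the logger names are hypothetical): a module that owns its file releases it itself, instead of assuming the whole process is about to shut down.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical module that owns a FILE and releases it itself, rather
 * than relying on process teardown. */
struct logger {
    FILE *out;
};

struct logger *logger_open(const char *path)
{
    struct logger *lg = malloc(sizeof *lg);
    if (!lg)
        return NULL;
    lg->out = fopen(path, "a");
    if (!lg->out) {
        free(lg);
        return NULL;
    }
    return lg;
}

void logger_close(struct logger *lg)
{
    if (!lg)
        return;
    fclose(lg->out);     /* the module cleans up, not exit() */
    free(lg);
}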
Answer 5:
Every sane operating system (certainly any form of Linux, or Windows) will close the files when the program terminates. If you have a very simple program then you probably don't need to close files on termination. However, closing the files explicitly is still good practice, for the following reasons:
- if you leave it to the OS you have no control over the order in which the files are closed, which may lead to consistency problems (such as in a multi-file database);
- if there are errors associated with closing the file (such as I/O errors or out-of-space errors) you have no way of reporting them (see the sketch after this list);
- there may be interactions with file locking which need to be handled;
- a routine that closes all files can handle any other clean-up the program needs at the same time (flushing buffers, for instance).
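A sketch of the error-reporting point: buffered data is flushed by fclose(), so a write error (a full disk, for example) may only surface there. The close_or_report helper below is just an illustrative name.

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Close a stream and report errors that only show up when buffers are
 * flushed, e.g. out-of-space errors. */
static int close_or_report(FILE *f, const char *name)
{
    if (fclose(f) != 0) {
        fprintf(stderr, "error closing %s: %s\n", name, strerror(errno));
        return -1;
    }
    return 0;
}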
Answer 6:
C does guarantee that all open files will be closed if your program terminates normally (i.e. via exit() or a return from main()). However, if your program terminates abnormally, e.g. it is killed by the operating system for dereferencing a NULL pointer, it is up to the operating system to close the files. It is therefore a good idea to make sure files are closed once they're no longer needed, in case of unexpected termination.
The other reason is resource limits. Most operating systems have limits on the number of files open (as well as many other things), and so it's good practice to return those resources as soon as they're no longer needed. If every program kept all its files open indefinitely, systems could run into problems quite quickly.
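For example (a sketch, with a made-up count_bytes helper), a program that processes many files should close each one as soon as it is done with it, rather than keeping them all open until exit:

#include <stdio.h>

/* Count the bytes in one file; close it before moving on so the process
 * does not accumulate open descriptors. */
static long count_bytes(const char *path)
{
    FILE *f = fopen(path, "rb");
    long n = 0;

    if (!f)
        return -1;
    while (fgetc(f) != EOF)
        n++;
    fclose(f);           /* return the descriptor before opening the next file */
    return n;
}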
Source: https://stackoverflow.com/questions/15246833/is-it-a-good-practice-to-close-file-descriptors-on-exit