I have a short C program that writes into a file until there is no more space on disk:
#include <stdio.h>

int main(void) {
    char c[] = "abcdefghij";
    size_t rez;
    FILE *f = fopen("filldisk.dat", "wb");
    while (1) {
        rez = fwrite(c, 1, sizeof(c), f);
        if (!rez) break;
    }
    fclose(f);
    return 0;
}
When I run the program (in Linux), it stops when the file reaches 2GB.
Is there an internal limitation, due to the FILE structure, or something?
Thanks.
On a 32-bit system (i.e. the OS itself is 32-bit), fopen and friends are by default limited to 32-bit sizes/offsets. You need to enable Large File Support, or use the *64 variants:
http://www.gnu.org/software/libc/manual/html_node/Opening-Streams.html#index-fopen64-931
Your filesystem also needs to support this, but apart from FAT and other primitive filesystems, all of them support creating files larger than 2 GB.
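As a rough sketch of the *64 route (assuming glibc on 32-bit Linux; _LARGEFILE64_SOURCE and fopen64 are glibc extensions, not standard C), it would look something like this:

    /* glibc extension: expose fopen64/off64_t; not part of standard C */
    #define _LARGEFILE64_SOURCE
    #include <stdio.h>

    int main(void) {
        char c[] = "abcdefghij";
        /* fopen64 gives the stream 64-bit offsets even on a 32-bit system,
           so the file can grow past 2 GB */
        FILE *f = fopen64("filldisk.dat", "wb");
        if (!f) return 1;
        while (fwrite(c, 1, sizeof(c), f) == sizeof(c))
            ;  /* keep writing until the disk is full or an error occurs */
        fclose(f);
        return 0;
    }

Defining _FILE_OFFSET_BITS=64 instead (as in the next answer) maps the ordinary fopen onto the 64-bit interface transparently, which is usually preferred.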
"it stops when the file reaches 2GB. Is there an internal limitation, due to the FILE structure, or something?"
This is due to libc (the standard C library), which on an x86 (IA-32) Linux system provides 32-bit file functions by default via glibc (the GNU C Library). So by default the file offset is a signed 32-bit value, which limits the stream to 2^31 - 1 bytes, i.e. 2 GB.
To use Large File Support, see the glibc documentation; in short, define _FILE_OFFSET_BITS to 64 before including any system headers:
    #define _FILE_OFFSET_BITS 64
    /* or, more commonly, add -D_FILE_OFFSET_BITS=64 to CFLAGS */
    #include <stdio.h>

    int main(void) {
        char c[] = "abcdefghij";
        size_t rez;
        FILE *f = fopen("filldisk.dat", "wb");
        while (1) {
            rez = fwrite(c, 1, sizeof(c), f);
            /* stop on a short write: the disk is full or an error occurred */
            if (rez < sizeof(c)) { break; }
        }
        fclose(f);
        return 0;
    }
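For example (assuming gcc, and using the same flag mentioned in the comment above), the program might be built as:

    gcc -D_FILE_OFFSET_BITS=64 -o filldisk filldisk.c

Here filldisk.c is just a placeholder name for wherever the source file lives.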
Note: most systems expect fopen (and off_t) to be based on a 2^31 file-size limit. Replacing them with off64_t and fopen64 makes the 64-bit offsets explicit, and depending on usage that might be the best way to go, but it is not recommended in general because those interfaces are non-standard.
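As a minimal sketch (assuming glibc and a C11 compiler; the _Static_assert is purely illustrative), you can verify at compile time that Large File Support is actually in effect by checking that off_t has grown to 64 bits:

    /* must come before any system header */
    #define _FILE_OFFSET_BITS 64
    #include <sys/types.h>

    /* on a 32-bit glibc system, off_t is 8 bytes only when LFS is enabled */
    _Static_assert(sizeof(off_t) == 8, "Large File Support is not enabled");

    int main(void) { return 0; }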
Source: https://stackoverflow.com/questions/730709/2gb-limit-on-file-size-when-using-fwrite-in-c