2GB limit on file size when using fwrite in C?

Asked by 一生所求, 2020-12-01 14:16

I have a short C program that writes into a file until there is no more space on disk:

#include <stdio.h>

int main(void) {
  char c[] = "abcdefghij";
  size_t rez;
  FILE *f = fopen("filldisk.dat", "wb");
  while (1) {
    rez = fwrite(c, 1, sizeof(c), f);
    if (rez < sizeof(c)) { break; }
  }
  fclose(f);
  return 0;
}

2 Answers
  • 2020-12-01 14:42

    > it stops when the file reaches 2GB.
    >
    > Is there an internal limitation, due to the FILE structure, or something?

    This is due to libc (the standard C library): on a 32-bit x86 (IA-32) Linux system, glibc's stdio functions use a signed 32-bit file offset (off_t) by default, so a stream's file position is limited to 2^31 - 1 bytes, i.e. just under 2 GiB.

    To enable Large File Support (LFS), define _FILE_OFFSET_BITS=64 before any header is included; this makes off_t (and the stdio offset functions) 64-bit:

    #define _FILE_OFFSET_BITS  64
    /* or more commonly add -D_FILE_OFFSET_BITS=64 to CFLAGS */
    
    #include <stdio.h>
    
    int main(void) {
      char c[] = "abcdefghij";
      size_t rez;
      FILE *f = fopen("filldisk.dat", "wb");
      if (f == NULL) { return 1; }  /* could not open the file */
      while (1) {
        rez = fwrite(c, 1, sizeof(c), f);
        if (rez < sizeof(c)) { break; }  /* short write: the disk is full */
      }
      fclose(f);
      return 0;
    }
    

    Note: most systems' fopen (and off_t) default to the 2^31 - 1 file-size limit. Replacing them with fopen64 and off64_t makes the 64-bit offsets explicit, and depending on usage that may be the clearest way to go, but they are glibc extensions rather than standard C, so _FILE_OFFSET_BITS=64 is the more portable choice.

  • 2020-12-01 14:56

    On a 32-bit system (i.e. the OS itself is 32-bit), fopen and friends are limited to 32-bit sizes/offsets by default. You need to enable Large File Support, or use the *64 variants:

    http://www.gnu.org/software/libc/manual/html_node/Opening-Streams.html#index-fopen64-931

    Your filesystem also needs to support this, but apart from FAT and other limited filesystems, essentially all modern filesystems can hold files larger than 2 GB.
