What is a good programming pattern for handling return values from stdio file writing functions

梦如初夏 2021-01-02 11:28

I'm working on some code that generates a lot of warnings like:

ignoring return value of ‘size_t fwrite(const void*, size_t, size_t, FILE*)’, declared with attribute warn_unused_result

13 answers
  • 2021-01-02 11:56

    Well ... You could create a wrapper function, that re-tries the write if it fails, perhaps up to some maximum number of retries, and returns success/failure:

    int safe_fwrite(FILE *file, const void *data, size_t nbytes, unsigned int retries);
    void print_and_exit(const char *message);
    

    Then your main code could be written as

    #define RETRIES 5
    if(!safe_fwrite(fp, &blah, sizeof blah, RETRIES))
      print_and_exit("Blah writing failed, aborting");
    if(!safe_fwrite(fp, &foo, sizeof foo, RETRIES))
      print_and_exit("Foo writing failed, aborting");
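
    A sketch of what such a wrapper might look like (this implementation is an illustration, not part of the original answer; it assumes a short write is worth retrying after clearerr()):

    ```c
    #include <stdio.h>

    /* Illustrative sketch of the safe_fwrite() declared above: writes nbytes
     * bytes, retrying after a failed attempt up to `retries` more times.
     * Returns 1 on success, 0 on failure. */
    int safe_fwrite(FILE *file, const void *data, size_t nbytes, unsigned int retries)
    {
        const unsigned char *p = data;
        while (nbytes > 0) {
            size_t n = fwrite(p, 1, nbytes, file);
            p += n;
            nbytes -= n;
            if (n == 0) {            /* no progress on this attempt */
                if (retries == 0)
                    return 0;        /* out of retries: report failure */
                --retries;
                clearerr(file);      /* reset the error flag before retrying */
            }
        }
        return 1;
    }
    ```

    Whether retrying actually helps depends on the cause of the failure (a full disk rarely un-fills itself), but the wrapper at least centralizes the check.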
    
  • 2021-01-02 11:59

    A potentially elegant C solution for this could be something like this (warning - untested, uncompiled code ahead):

    
      int    ok = 1;
      size_t num_elements = x;
  ok = (fwrite(stuff, sizeof(stuff[0]), num_elements, outfile) == num_elements);
    
      if (ok) {
        ... do other stuff ...
      }
    
  ok = ok && (fwrite(stuff, sizeof(stuff[0]), num_elements, outfile) == num_elements);
    
      if (ok) {
        ... etc etc ad nauseam ...
      }
    
      fclose(outfile);
    
      return ok;
    
    

    The above accomplishes two goals at the same time:

    • Return values are checked, thus eliminating the warning and giving you the ability to return a status code.
    • Thanks to short-circuit evaluation, if one of the fwrite() calls fails, the subsequent ones are not executed. Writing stops instead of continuing and leaving you with a potentially corrupt file, as could happen if the error condition cleared partway through the function and later writes succeeded again.

    Unfortunately the 'ugly' if (ok) blocks are necessary unless you chain everything through short-circuit evaluation. I've seen this pattern used in comparatively small functions that do short-circuit everywhere, and it's probably best suited to that scale.
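
    Fleshed out into a compilable sketch (the function name, the two buffers, and returning the flag directly are assumptions for illustration):

    ```c
    #include <stdio.h>

    /* Sketch of the ok-flag pattern: every fwrite() result feeds the flag,
     * and short-circuiting skips the later writes once one has failed. */
    int write_two_blocks(FILE *outfile, const int *a, const int *b, size_t num_elements)
    {
        int ok = 1;

        ok = (fwrite(a, sizeof *a, num_elements, outfile) == num_elements);

        /* ... other work guarded by if (ok) could go here ... */

        ok = ok && (fwrite(b, sizeof *b, num_elements, outfile) == num_elements);

        return ok;   /* 1 = all writes succeeded, 0 = something failed */
    }
    ```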

  • 2021-01-02 12:00

    The poor man's C exception handling based on goto (in fact, the one and only instance of goto NOT being harmful):

    int foo() {
        FILE * fp = fopen(...);
        ....
    
        /* Note: fwrite returns the number of elements written, not bytes! */
        if (fwrite (&blah, sizeof (blah), 1, fp) != 1) goto error1;
    
        ...
    
        if (fwrite (&foo, sizeof (foo), 1, fp) != 1) goto error2;
    
        ...
    
        /* Everything went fine */
        fclose(fp);
        return 0;
    
    error1:
        /* Error case 1 */
        fclose(fp);
        return -1;
    
    error2:
        /* Error case 2 */
        fclose(fp);
        return -2;
    }
    

    You get the idea. Restructure as you wish (single/multiple returns, single cleanup, custom error messages, etc.). In my experience this is the most common C error-handling pattern out there. The crucial point is: NEVER, EVER ignore stdlib return codes, and no reason for doing so (e.g. readability) is good enough.
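
    The "single cleanup" restructuring mentioned above could be sketched like this (the path argument, record types, and error codes are placeholders):

    ```c
    #include <stdio.h>

    /* Sketch of the single-cleanup goto variant: every failure jumps to one
     * cleanup label, and rc records which step failed. */
    int write_records(const char *path, const double *blah, const long *foo)
    {
        int rc = 0;
        FILE *fp = fopen(path, "wb");
        if (!fp)
            return -1;

        if (fwrite(blah, sizeof *blah, 1, fp) != 1) { rc = -2; goto cleanup; }
        if (fwrite(foo, sizeof *foo, 1, fp) != 1)   { rc = -3; goto cleanup; }

    cleanup:
        /* fclose() can fail too: it flushes any still-buffered data */
        if (fclose(fp) != 0 && rc == 0)
            rc = -4;
        return rc;
    }
    ```

    This variant closes the file in exactly one place, which is why the goto-based pattern scales better than duplicating fclose() in every error branch.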

  • 2021-01-02 12:02

    I'd do something along these lines:

    FILE * file = fopen("foo", "wb");
    if(!file) return FAILURE;
    
    // assume failure by default
    _Bool success = 0;
    
    do
    {
        if(!fwrite(&bar, sizeof(bar), 1, file))
            break;
    
        // [...]
    
        if(!fwrite(&baz, sizeof(baz), 1, file))
            break;
    
        // [...]
    
        success = 1;
    } while(0);
    
    fclose(file);
    
    return success ? SUCCESS : FAILURE;
    

    With a little C99 macro magic

    #define with(SUBJECT, FINALIZE, ...) do { \
        if(SUBJECT) do { __VA_ARGS__ } while(0); if(SUBJECT) FINALIZE; \
    } while(0)
    

    and using ferror() as suggested by Jonathan Leffler instead of checking a flag after every call, this can be written as

    FILE * file = fopen("foo", "wb");
    int io_error = 0;
    
    // ferror() must be captured *before* fclose(): the stream
    // may not be inspected after it has been closed
    with(file, (io_error = ferror(file), fclose(file)),
    {
        if(!fwrite(&bar, sizeof(bar), 1, file))
            break;
    
        // [...]
    
        if(!fwrite(&baz, sizeof(baz), 1, file))
            break;
    
        // [...]
    });
    
    return file && !io_error ? SUCCESS : FAILURE;
    
    

    If there are other error conditions aside from io errors, you'll still have to track them with one or more error variables, though.

    Also, your check against sizeof(blah) is wrong: fwrite() returns the count of objects written!

  • 2021-01-02 12:03

    OK, given that I'm looking for a C solution (no exceptions), how about:

    void safe_fwrite(const void *data, size_t size, size_t count, FILE *fp) {
       if (fwrite(data, size, count, fp) != count) {
          fprintf(stderr, "[ERROR] fwrite failed!\n");
          fclose(fp);
          exit(4);
       }
    }
    
    

    And then in my code I have:

    safe_fwrite (&blah, sizeof (blah), 1, fp);
    // ... more code ...
    safe_fwrite (&foo, sizeof (foo), 1, fp);
    // ... more code ...
    
  • 2021-01-02 12:08

    You could write a wrapper function

    void new_fwrite(const void *a, size_t b, size_t c, FILE *d) {
        if (fwrite(a, b, c, d) != c)   /* compare against the element count */
            abort();                   /* C has no exceptions; abort or report the error instead */
    }
    
    

    and then replace all calls to fwrite with new_fwrite
