Fixed-length data types in C/C++

春和景丽 2020-12-23 14:34

I've heard that the size of data types such as int may vary across platforms.

My first question is: can someone give an example of what goes wrong when a program assumes an int is 4 bytes, but on a different platform it is, say, 2 bytes?

My second question is: how can one enforce that some type is always, say, 32 bits regardless of the platform?

11 answers
  • 2020-12-23 15:02

    Bit flags are the trivial example: 0x10000 will cause you problems. You can't mask with it, or check whether the bit in that 17th position is set, if everything is being truncated or smashed to fit into 16 bits.
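
    A minimal sketch of that failure, using a 16-bit unsigned type as a stand-in for a narrow int:

    #include <cassert>
    #include <cstdint>

    int main() {
        std::uint16_t flags = 0xFFFF;                               // all 16 bits set
        std::uint16_t mask  = static_cast<std::uint16_t>(0x10000);  // truncated to 0
        assert(mask == 0);              // the 17th bit cannot be stored at all
        assert((flags & mask) == 0);    // so this bit test can never succeed
    }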

  • 2020-12-23 15:04

    For the first question: Integer Overflow.

    For the second question: for example, to typedef an unsigned 32-bit integer on a platform where int is 4 bytes, use:

    typedef unsigned int u32;
    

    On a platform where int is 2 bytes while long is 4 bytes:

    typedef unsigned long u32;
    

    In this way, you only need to modify one header file to make the types cross-platform.

    If platform-specific macros are available, the selection can be automated instead of edited by hand:

    #if defined(PLAT1)
    typedef unsigned int u32;
    #elif defined(PLAT2)
    typedef unsigned long u32;
    #endif
    

    If C99's stdint.h is supported, its fixed-width types are preferred.
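
    For instance, a sketch using the fixed-width types (here via C++'s <cstdint> equivalent):

    #include <cassert>
    #include <climits>
    #include <cstdint>

    int main() {
        std::uint32_t u = 0xFFFFFFFFu;       // exactly 32 bits wherever uint32_t exists
        assert(sizeof(u) * CHAR_BIT == 32);
        ++u;                                 // unsigned wrap-around is well defined
        assert(u == 0);
    }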

  • 2020-12-23 15:05

    can someone bring some example, what goes wrong, when program assumes an int is 4 bytes, but on a different platform it is say 2 bytes?

    Say you've designed your program to read 100,000 inputs, and you count them with an unsigned int, assuming a size of 32 bits (a 32-bit unsigned int can count up to 4,294,967,295). If you compile the code on a platform (or with a compiler) that has 16-bit integers (a 16-bit unsigned int can only count up to 65,535), the counter will wrap around past 65,535 and report a wrong count.
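
    That wrap-around can be reproduced on any platform by using a 16-bit type to play the role of the narrow unsigned int:

    #include <cassert>
    #include <cstdint>

    int main() {
        std::uint16_t count = 0;           // stand-in for a 16-bit unsigned int
        for (long i = 0; i < 100000; ++i)
            ++count;                       // wraps modulo 65536
        assert(count == 100000 % 65536);   // 34464, not the expected 100000
    }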

  • 2020-12-23 15:06

    I am curious, how can one enforce that some type is always, say, 32 bits regardless of the platform?

    If you want your (modern) C++ program's compilation to fail if a given type is not the width you expect, add a static_assert somewhere. I'd add this around where the assumptions about the type's width are being made.

    static_assert(sizeof(int) == 4, "Expected int to be four chars wide but it was not.");
    

    A char is 8 bits wide on most commonly used platforms, but not all platforms work this way; CHAR_BIT from <climits> gives the actual width.
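
    A stricter variant that does not assume 8-bit chars, as a sketch:

    #include <climits>

    // Fails to compile on any platform where int is not exactly 32 bits.
    static_assert(sizeof(int) * CHAR_BIT == 32,
                  "Expected int to be exactly 32 bits wide, but it was not.");

    int main() {}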

  • 2020-12-23 15:12

    Usually, the issue happens when you max out the range or when you're serializing. A less common scenario happens when someone makes an explicit size assumption.

    In the first scenario:

    int x = 32000;
    int y = 32000;
    int z = x+y;        // overflows a 2-byte int (undefined behavior for signed), but not a 4-byte one
    

    In the second scenario,

    struct header {
        int magic;
        int w;
        int h;
    };
    

    then one goes to fwrite:

    header h;
    // fill in h
    fwrite(&h, sizeof(h), 1, fp);
    
    // this is all fine and good until one freads from an architecture with a different int size
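    
    One portable fix, assuming the fixed-width types of <cstdint> are available, is to declare the on-disk layout with them, so every platform agrees on the field sizes (padding aside):
    
    #include <cstdint>
    #include <cstdio>
    
    struct header {
        std::int32_t magic;   // exactly 4 bytes on every conforming platform
        std::int32_t w;
        std::int32_t h;
    };
    
    int main() {
        header h{0x12345678, 640, 480};
        if (FILE* fp = std::fopen("header.bin", "wb")) {
            std::fwrite(&h, sizeof h, 1, fp);
            std::fclose(fp);
        }
    }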
    

    In the third scenario:

    int* x = new int[100];
    char* buff = (char*)x;
    
    // now try to change the 3rd element of x via buff, assuming an int size of 2
    *((int*)(buff + 2*2)) = 100;
    
    // (of course, it's easy to fix this with sizeof(int))
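    
    As that comment notes, the size assumption disappears once the offset is computed with sizeof(int); a sketch of the fixed version:
    
    #include <cassert>
    
    int main() {
        int* x = new int[100]{};                   // zero-initialized
        char* buff = reinterpret_cast<char*>(x);
    
        // portable: offset derived from the real int size, not a guessed 2
        *reinterpret_cast<int*>(buff + 2 * sizeof(int)) = 100;
    
        assert(x[2] == 100);                       // the 3rd element, as intended
        delete[] x;
    }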
    

    If you're using a relatively new compiler, I would use uint8_t, int8_t, etc. to be sure of the type sizes.

    In older compilers, typedefs are usually defined on a per-platform basis. For example, one may do:

    #ifdef _WIN32
        typedef unsigned char  uint8_t;
        typedef unsigned short uint16_t;
        // and so on...
    #endif
    

    In this way, there would be a header per platform that defines specifics of that platform.

    0 讨论(0)