I've heard that the size of data types such as int may vary across platforms.

My first question is: can someone bring some example of what goes wrong when a program assumes an int is 4 bytes, but on a different platform it is, say, 2 bytes?

My second question is related to this. I know people solve this issue with some typedefs - you have variables like u8, u16, u32, which are guaranteed to be 8 bits, 16 bits, 32 bits regardless of the platform. How is this achieved usually? With typedefs based on #ifdef for the specific platforms?
There are some platforms which have no types of certain sizes (for example TI's 28xxx, where the size of char is 16 bits). In such cases, it is not possible to have an 8-bit type (unless you really want it, but that may introduce a performance hit).

how is this achieved usually?

Usually with typedefs. C99 (and C++11) have these typedefs in a header. So, just use them.
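For instance, here is a minimal sketch of how code can stay portable even to a platform like the TI 28xxx above, by falling back to the "least" types that C99 guarantees to exist (the alias byte_t is a made-up name for illustration):

#include <stdint.h>

/* Prefer an exact 8-bit type when the platform has one, otherwise fall
   back to the smallest type with at least 8 bits (16 bits on a part
   like the TI 28xxx mentioned above). */
#ifdef UINT8_MAX                 /* defined only if uint8_t exists */
typedef uint8_t       byte_t;
#else
typedef uint_least8_t byte_t;
#endif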
can someone bring some example of what goes wrong when a program assumes an int is 4 bytes, but on a different platform it is, say, 2 bytes?

The best example is communication between systems with different type sizes. Sending an array of ints from one platform to another where sizeof(int) differs between the two requires extreme care.

Also, saving an array of ints in a binary file on a 32-bit platform and reinterpreting it on a 64-bit platform.
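A minimal sketch of that binary-file pitfall (the function names here are made up): writing raw ints bakes sizeof(int) into the file format, while a fixed-width element type at least pins down the size (byte order is a separate concern):

#include <stdint.h>
#include <stdio.h>

/* Non-portable: a reader whose int has a different size will
   misinterpret this file. */
void save_raw(FILE *f, const int *data, size_t n)
{
    fwrite(data, sizeof(int), n, f);
}

/* Better: every element is exactly 32 bits on every platform
   (endianness still has to be agreed on separately). */
void save_fixed(FILE *f, const int32_t *data, size_t n)
{
    fwrite(data, sizeof(int32_t), n, f);
}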
First of all: never write programs that rely on the width of types like short, int, unsigned int, and so on.

Basically: "never rely on the width, if it isn't guaranteed by the standard".

If you want to be truly platform independent and store, e.g., the value 33000 as a signed integer, you can't just assume that an int will hold it. An int has at least the range -32767 to 32767 or -32768 to 32767 (depending on ones'/two's complement). That's just not enough, even though int usually is 32 bits and therefore capable of storing 33000. For this value you definitely need a type wider than 16 bits, so you simply choose int32_t or int64_t. If this type doesn't exist, the compiler will tell you with an error, but it won't be a silent mistake.

Second: C++11 provides a standard header for fixed width integer types. None of these are guaranteed to exist on your platform, but when they exist, they are guaranteed to be of the exact width. See this article on cppreference.com for a reference. The types are named in the format int[n]_t and uint[n]_t where n is 8, 16, 32 or 64. You'll need to include the header <cstdint>. The C header is of course <stdint.h>.
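As a small sketch of the 33000 example (using the C header, since the same code works in both languages):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* A plain int is only guaranteed to reach 32767, so 33000 could
       overflow it; int32_t is guaranteed to hold it wherever it exists. */
    int32_t value = 33000;
    printf("%ld\n", (long)value);
    return 0;
}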
Well, first example - something like this:
int a = 45000; // both a and b
int b = 40000; // do not fit in 2 bytes
int c = a + b; // overflows with a 16-bit int, but not with a 32-bit int
If you look into the cstdint header, you will find how all fixed size types (int8_t, uint8_t, etc.) are defined - the only thing that differs between architectures is this header file. So, on one architecture int16_t could be:
typedef int int16_t;
and on another:
typedef short int16_t;
Also, there are other types which may be useful, like int_least16_t.
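For example, something along these lines works even where the exact-width types are missing (just a sketch, not taken from any particular header):

#include <stdint.h>

/* The "least" and "fast" variants are required to exist even on
   platforms that cannot provide exact-width types. */
int_least16_t counter = 0;  /* smallest type with at least 16 bits */
int_fast16_t  idx     = 0;  /* "fastest" type with at least 16 bits */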
In earlier iterations of the C standard, you generally made your own typedef statements to ensure you got a (for example) 16-bit type, based on #define strings passed into the compiler, for example:
gcc -DINT16_IS_LONG ...
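A sketch of what such a hand-rolled header might have looked like (the macro and type names here are only illustrative):

/* Selected by flags like -DINT16_IS_LONG on the compiler command line. */
#if defined(INT16_IS_LONG)
typedef long  int16;   /* unusual targets where long is the 16-bit type */
#elif defined(INT16_IS_INT)
typedef int   int16;   /* e.g. 16-bit targets where plain int is 16 bits */
#else
typedef short int16;   /* the common case: short is 16 bits */
#endif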
Nowadays (C99 and above), there are specific types such as uint16_t, the exactly 16-bit wide unsigned integer.

Provided you include stdint.h, you get exact bit width types, at-least-that-width types, fastest types with a given minimum width and so on, as documented in C99 7.18 Integer types <stdint.h>. If an implementation has compatible types, it is required to provide them.

Also very useful is inttypes.h, which adds some other neat features for format conversion of these new types (printf and scanf format strings).
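For instance, a minimal sketch of the format-macro usage:

#include <inttypes.h>   /* also pulls in stdint.h */
#include <stdio.h>

int main(void)
{
    int32_t x = 0;
    /* PRId32 / SCNd32 expand to the right conversion specifiers
       for int32_t on the current platform. */
    if (scanf("%" SCNd32, &x) == 1)
        printf("read %" PRId32 "\n", x);
    return 0;
}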
Compilers are responsible for obeying the standard. When you include <cstdint> or <stdint.h>, they shall provide types according to the standard sizes.

Compilers know which platform they are compiling for, so they can use internal macros or magic to build the suitable types. For example, a compiler targeting a 32-bit machine might generate a __32BIT__ macro, and its stdint header file would contain lines like these:
#ifdef __32BIT__
typedef __int32_internal__ int32_t;
typedef __int64_internal__ int64_t;
...
#endif
and you can just use these types.
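If you want to be extra sure, that guarantee can even be checked at compile time; a small sketch (C11 _Static_assert, or static_assert in C++11):

#include <limits.h>
#include <stdint.h>

/* int32_t, where it exists, must be exactly 32 bits with no padding. */
_Static_assert(sizeof(int32_t) * CHAR_BIT == 32,
               "int32_t is not 32 bits wide");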