Question
There is already a question answering the particular case of variable declarations, but what about other uses of literal constants?
For example:
uint64_t a;
...
int32_t b = a / 1000000000;
Is the last piece of code equivalent to the next one under any standard C compiler?
uint64_t a;
...
int32_t b = (int32_t)(a / UINT64_C(1000000000));
In other words, are the xINTn_C macros needed at all (supposing we use explicit casts in cases where the implicit conversion is wrong)?
EDIT
When the compiler reads 1000000000, is it allowed to store it as an int in its internal representation (dropping all overflowing bits), or must it keep it at the highest available precision (long long) until it has resolved the type of the whole expression? Is this implementation-defined behavior, or is it mandated by the standard?
Answer 1:
Your second example isn't valid C99 and looks like C++. Perhaps what you want is a cast, i.e. (int32_t)(a / UINT64_C(1000000000))?
Is there a difference between a / UINT64_C(1000000000) and a / 1000000000? No, they'll end up with the same operation. But I don't think that's really your question.
I think your question boils down to this: what will the type of the integer literal "1000000000" be? Will it be an int32_t or an int64_t? The answer in C99 comes from §6.4.4.1 paragraph 5:
The type of an integer constant is the first of the corresponding list in which its value can be represented.
For decimal constants with no suffix, the list is int, long int, long long int. So the first literal will almost certainly be an int (depending on the size of an int, which will likely be 32 bits and therefore large enough to hold one billion). The second literal, with the UINT64_C macro, will likely be either an unsigned long or an unsigned long long, depending on the platform. It will be whatever type corresponds to uint64_t.
So the types of the constants are not the same. The first will be signed while the second is unsigned. And the second will most likely have more "longs", depending on the compiler's sizes of the basic int types.
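A quick way to see this on a given compiler is C11's _Generic, which selects a branch by the type of its controlling expression. This is only a sketch (it assumes a C11 compiler; the exact branch taken depends on the platform's type sizes):

#include <stdio.h>
#include <stdint.h>

#define TYPE_NAME(x) _Generic((x), \
    int: "int", \
    long int: "long int", \
    long long int: "long long int", \
    unsigned long: "unsigned long", \
    unsigned long long: "unsigned long long", \
    default: "some other type")

int main(void)
{
    /* The unsuffixed decimal constant takes the first signed type
       from the list that can hold it -- typically int. */
    printf("1000000000 has type %s\n", TYPE_NAME(1000000000));
    /* UINT64_C pastes a suffix onto the literal, yielding whichever
       unsigned type uint64_t corresponds to on this platform. */
    printf("UINT64_C(1000000000) has type %s\n",
           TYPE_NAME(UINT64_C(1000000000)));
    return 0;
}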
In your example, it makes no difference that the literals have different types, because the / operator will need to promote the literal to the type of a (a will be of equal or greater rank than the literal in any case). Which is why I didn't think that was really your question.
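As a quick sketch of that point (the value of a is arbitrary), both divisions are carried out in uint64_t and give the same result:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t a = 123456789012345678u; /* arbitrary sample value */
    /* The usual arithmetic conversions bring the int constant up to
       uint64_t before dividing, so both forms behave identically. */
    printf("%" PRIu64 "\n", a / 1000000000);           /* 123456789 */
    printf("%" PRIu64 "\n", a / UINT64_C(1000000000)); /* 123456789 */
    return 0;
}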
For an example of why UINT64_C() would matter, consider an expression where the result changes if the literals are promoted to a larger type. I.e., overflow will occur in the literals' native types.
int32_t a = 10;
uint64_t b = 1000000000 * a; // 32-bit signed multiply overflows (undefined behavior)
uint64_t c = UINT64_C(1000000000) * a; // constant is 64-bit, no overflow
To compute c, the compiler will need to promote a to uint64_t and perform a 64-bit multiplication. But to compute b, the compiler will use 32-bit multiplication, since both values are 32 bits.
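Here is that example as a complete program. Treat it as a sketch: the signed overflow in b is formally undefined behavior, and 1410065408 is merely the wrapped value a typical two's-complement implementation produces:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int32_t a = 10;
    /* Both operands are 32-bit, so the multiply happens in int and
       overflows (undefined behavior); only afterwards is the result
       converted to uint64_t. */
    uint64_t b = 1000000000 * a;
    /* The constant is uint64_t, so a is converted up first and the
       multiply happens in 64 bits. */
    uint64_t c = UINT64_C(1000000000) * a;
    printf("b = %" PRIu64 "\n", b); /* likely 1410065408 */
    printf("c = %" PRIu64 "\n", c); /* 10000000000 */
    return 0;
}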
In the last example, one could use a cast instead of the macro:
uint64_t c = (uint_least64_t)(1000000000) * a;
That would also force the multiplication to be at least 64 bits.
Why would you ever use the macro instead of casting a literal? One possibility is that decimal literals are always signed, so suppose you want a constant that isn't representable as a signed value. For example:
uint64_t x = (uint64_t)9888777666555444333; // warning, literal is too large
uint64_t y = UINT64_C(9888777666555444333); // works
uint64_t z = (uint64_t)(9888777666555444333U); // also works
Another possibility is for preprocessor expressions. A cast isn't legal syntax in the expression of a #if directive, but the UINTxx_C() macros are.
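For instance, here is a sketch of a #if that works only with the macro form (the cast variant is shown in a comment because it would not even preprocess, and HAVE_BIG_CONSTANT is just a placeholder name):

#include <stdint.h>

/* This would be a syntax error in a preprocessor expression:
 *   #if (uint64_t)1000000000 > 0
 * but the macro form expands to a plain suffixed literal: */
#if UINT64_C(1000000000) > 0
#define HAVE_BIG_CONSTANT 1
#endif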
Since the macros use suffixes pasted onto literals and there is no suffix for short, one will likely find that UINT16_C(x) and UINT32_C(x) are identical. This gives the result that (uint_least16_t)(65537) != UINT16_C(65537). Not what one might expect. In fact, I have a hard time seeing how this complies with C99 §7.18.4.1:
The macro UINTN_C(value) shall expand to an integer constant expression corresponding to the type uint_leastN_t.
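A sketch of that surprise, assuming a typical implementation where uint_least16_t is 16 bits wide and UINT16_C applies no suffix:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* The cast wraps 65537 modulo 2^16, giving 1; UINT16_C(65537)
       typically stays a plain int with value 65537, so the two
       sides compare unequal. */
    if ((uint_least16_t)(65537) != UINT16_C(65537))
        printf("not equal: %u vs %u\n",
               (unsigned)(uint_least16_t)(65537),
               (unsigned)UINT16_C(65537));
    return 0;
}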
Source: https://stackoverflow.com/questions/43193065/are-literal-suffixes-needed-in-standard-c