Question:
Have a look at the following piece of code:
#include <stdio.h>

int main(void)
{
    int a;

    a = 2147483647;
    printf("a + 1 = %d \t sizeof (a + 1) = %zu\n", a + 1, sizeof (a + 1));
    printf("a + 1L = %ld \t sizeof (a + 1L) = %zu\n", a + 1L, sizeof (a + 1L));

    a = -1;
    printf("a + 1 = %d \t sizeof (a + 1) = %zu\n", a + 1, sizeof (a + 1));
    printf("a + 1L = %ld \t sizeof (a + 1L) = %zu\n", a + 1L, sizeof (a + 1L)); // why does a + 1L not yield a long integer here?

    return 0;
}
This results in the following output:
a + 1 = -2147483648 sizeof (a + 1) = 4
a + 1L = 2147483648 sizeof (a + 1L) = 8
a + 1 = 0 sizeof (a + 1) = 4
a + 1L = 0 sizeof (a + 1L) = 8
Why does a + 1L in the last line yield 0 instead of a long integer such as 4294967296?
Answer 1:
Why does a + 1L in the last line not yield the long integer 4294967296?
Because converting the int -1 to a long int results in a long int with value -1 (the conversion preserves the value), and -1 + 1 = 0.
Converting -1 to another type would only result in 4294967295 if the target type is an unsigned 32-bit type (usually unsigned int is such a type; more precisely, uint32_t, if provided). But even then, adding 1 to that value would wrap back to 0, as the sketch below shows.
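A minimal sketch of that wrap, assuming a 32-bit int and that uint32_t is available (the variable u is illustrative):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int a = -1;
    uint32_t u = (uint32_t)a; // -1 converts to 4294967295 (UINT32_MAX)

    printf("u     = %" PRIu32 "\n", u);      // 4294967295
    printf("u + 1 = %" PRIu32 "\n", u + 1u); // the unsigned addition wraps to 0
    return 0;
}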
Thus, to obtain 4294967296, you need an intermediate cast, (uint32_t)a + 1L, so that -1 is first converted to a uint32_t with value 4294967295, and that value is then converted to long before the addition.
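A minimal sketch of the suggested cast, assuming a 32-bit int and a 64-bit long as in the question's output:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int a = -1;
    // (uint32_t)a is 4294967295; adding 1L converts it to long first,
    // so the sum is computed in 64 bits and no wrap occurs.
    printf("(uint32_t)a + 1L = %ld\n", (uint32_t)a + 1L); // 4294967296
    return 0;
}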
Answer 2:
In the first case: 2147483647 is a 32-bit signed value with hex representation 0x7fffffff. Adding 1 to it gives you a 32-bit value, hex 0x80000000, which is -2147483648 as a 32-bit signed int (due to overflow) but 2147483648 when treated as a 64-bit signed int.
In the second case: -1 is a 32-bit signed value with hex representation 0xffffffff. Adding 1 to it gives you a 32-bit value, hex 0x00000000, which is 0 as a signed int.
When you add 1L in 64 bits, a sign extension happens first, so you really add 0xffffffffffffffff and 0x0000000000000001; the sum is 0, as expected.
There is no ambiguity once you account for the sign extension.
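A minimal sketch that makes the sign extension visible, assuming a 64-bit long (the cast to unsigned long is only there so the bit pattern can be printed portably with %lx):

#include <stdio.h>

int main(void)
{
    int a = -1;

    // Converting the int -1 (bit pattern 0xffffffff) to a 64-bit type
    // sign-extends it to 0xffffffffffffffff.
    printf("a sign-extended = 0x%lx\n", (unsigned long)(long)a);
    printf("a + 1L          = %ld\n", a + 1L); // 0
    return 0;
}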
Source: https://stackoverflow.com/questions/14748734/ambiguity-in-long-integer-arithmetic