I have this code.
#include <iostream>

int main()
{
    unsigned long int i = 1U << 31;
    std::cout << i << std::endl;
    unsigned long int uwantsum = 1 << 31;
    std::cout << uwantsum << std::endl;
    return 0;
}
The literal 1 with no U is a signed int, so when you shift it by << 31 you get integer overflow, generating a negative number (under the umbrella of undefined behavior). Assigning this negative number to an unsigned long causes sign extension, because long has more bits than int, and it translates the negative number into a large positive number by reducing it modulo 2^64, which is the rule for signed-to-unsigned conversion.
Presumably you're interested in why this:

unsigned long int uwantsum = 1 << 31;

produces a "strange" value. The problem is pretty simple: 1 is a plain int, so the shift is done on a plain int, and only after it's complete is the result converted to unsigned long.
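Incidentally, a minimal sketch of the fix this implies: make the left operand unsigned long before shifting, so the whole operation happens in the wider type rather than in int:

#include <iostream>

int main()
{
    unsigned long a = 1UL << 31;                             // shift done in unsigned long: 2147483648
    unsigned long b = static_cast<unsigned long>(1) << 31;   // equivalent, with an explicit cast
    std::cout << a << std::endl;
    std::cout << b << std::endl;
    return 0;
}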
In this case, however, 1 << 31 overflows the range of a 32-bit signed int, so the result is undefined [1]. After conversion to unsigned, the result remains undefined.
That said, in most typical cases, what's likely to happen is that 1 << 31 will give a bit pattern of 10000000000000000000000000000000. When viewed as a signed 2's complement number, this is -2147483648. Since that's negative, when it's converted to a 64-bit type it'll be sign extended, so the top 32 bits will be filled with copies of what's in bit 31. That gives:

1111111111111111111111111111111110000000000000000000000000000000

(33 1-bits followed by 31 0-bits). If we then treat that as an unsigned 64-bit number, we get 18446744071562067968.
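A quick way to check that bit pattern for yourself (a sketch assuming a 64-bit long, starting from INT_MIN to sidestep the undefined shift):

#include <iostream>
#include <bitset>
#include <climits>

int main()
{
    long wide = INT_MIN;                                         // -2147483648, sign-extended to 64 bits
    std::cout << std::bitset<64>(wide) << std::endl;             // 33 one-bits followed by 31 zero-bits
    std::cout << static_cast<unsigned long>(wide) << std::endl;  // 18446744071562067968
    return 0;
}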
[1] The value of E1 << E2 is E1 left-shifted E2 bit positions; vacated bits are zero-filled. If E1 has an unsigned type, the value of the result is E1 × 2^E2, reduced modulo one more than the maximum value representable in the result type. Otherwise, if E1 has a signed type and non-negative value, and E1 × 2^E2 is representable in the corresponding unsigned type of the result type, then that value, converted to the result type, is the resulting value; otherwise, the behavior is undefined.
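For example, the "reduced modulo" clause for unsigned operands works out like this (a sketch assuming a 32-bit unsigned int):

#include <iostream>

int main()
{
    // 3 * 2^31 = 6442450944, reduced modulo 2^32 gives 2147483648
    unsigned int x = 3U << 31;
    std::cout << x << std::endl;
    return 0;
}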
It's not "bizarre".
Try printing the number in hex and see if it's any more recognizable:
std::cout << std::hex << i << std::endl;
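For instance, assuming i ended up holding the large value discussed above in a 64-bit unsigned long, the hex output makes the sign extension easy to spot:

#include <iostream>

int main()
{
    unsigned long i = 18446744071562067968UL;   // the "strange" value (assumes 64-bit unsigned long)
    std::cout << std::hex << i << std::endl;    // prints ffffffff80000000
    return 0;
}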
And always remember to qualify your literals with "U", "L" and/or "LL" as appropriate:
http://en.cppreference.com/w/cpp/language/integer_literal
unsigned long long l1 = 18446744073709550592ull;
unsigned long long l2 = 18'446'744'073'709'550'592llu;
unsigned long long l3 = 1844'6744'0737'0955'0592uLL;
unsigned long long l4 = 184467'440737'0'95505'92LLU;
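(All four literals spell the same value; since C++14 the single-quote digit separators may be placed anywhere inside the literal, and the u/U and ll/LL parts of the suffix may appear in either order and either case.)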
I think it is compiler-dependent. It gives the same value for both lines on my machine (g++):

2147483648
2147483648

Proof: http://ideone.com/cvYzxN
And if overflow does occur, then because uwantsum is an unsigned long int and unsigned values are always non-negative, the conversion from signed to unsigned is done by taking the value modulo 2^64.
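As a quick illustration of that modulo-2^64 rule (a sketch assuming a 64-bit unsigned long, as in the g++ run above):

#include <iostream>

int main()
{
    int negative = -1;
    unsigned long converted = negative;    // -1 mod 2^64
    std::cout << converted << std::endl;   // 18446744073709551615
    return 0;
}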
Hope this helps!
It's in the way you printed it out. Using the format specifier %lu should print it as a proper unsigned long int.
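For example, a small printf-style sketch (assuming the shift itself is done as 1UL << 31 so the value is well-defined):

#include <cstdio>

int main()
{
    unsigned long i = 1UL << 31;
    std::printf("%lu\n", i);   // prints 2147483648
    return 0;
}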