I understand that a char variable holds values from -128 to 127 when signed, and 0 to 255 when unsigned.
char x;
x = 128;
printf(\"%d\\n\", x);
But how does it print -128?
Let's look at the binary representation of 128 when stored in 8 bits:
1000 0000
And now let's look at the binary representation of -128 when stored in 8 bits:
1000 0000
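If you want to see those two readings of the same byte side by side, here's a minimal sketch (my addition, not part of the original answer) that copies the bit pattern 1000 0000 into both an unsigned char and a signed char:

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char u = 0x80;   /* the bit pattern 1000 0000 */
    signed char s;
    memcpy(&s, &u, 1);        /* copy the very same byte into a signed char */
    printf("%d %d\n", u, s);  /* on a two's complement machine: 128 -128 */
    return 0;
}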
Plain char with your current setup appears to be signed (note this isn't fixed by the C standard; whether plain char is signed or unsigned is implementation-defined, look it up if you don't believe me). So when you assign 128 to x, you're really storing the bit pattern 1000 0000, and when you print it with %d it's read back as a signed value, which on a two's complement machine is -128.
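As an aside (again my addition rather than part of the original answer), you can check which choice your own implementation makes by looking at CHAR_MIN from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* CHAR_MIN is 0 when plain char is unsigned, negative when it is signed */
    if (CHAR_MIN < 0)
        printf("plain char is signed: %d to %d\n", CHAR_MIN, CHAR_MAX);
    else
        printf("plain char is unsigned: %d to %d\n", CHAR_MIN, CHAR_MAX);
    return 0;
}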
It turns out my environment makes the same choice and treats char as signed char. As expected, if I cast x to unsigned char then I get the expected output of 128:
#include <stdio.h>
#include <stdlib.h>
int main() {
    char x;
    x = 128;  /* 128 doesn't fit in a signed char; stored as -128 here */
    printf("%d %d\n", x, (unsigned char)x);
    return 0;
}
gives me the output of -128 128
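For what it's worth (my addition, not part of the original answer), GCC and Clang let you force the signedness of plain char with -fsigned-char and -funsigned-char, so the same program can print either result (assuming the file is saved as test.c):

gcc test.c -o test && ./test                   # -128 128 (char is signed by default here)
gcc -funsigned-char test.c -o test && ./test   # 128 128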
Hope this helps!