I understand that a char variable holds values from -128 to 127 (signed) or 0 to 255 (unsigned).
char x;
x = 128;
printf(\"%d\\n\", x);
But how does this code print -128?
Let's look at the binary representation of 128 when stored into 8 bits:
1000 0000
And now let's look at the binary representation of -128 when stored into 8 bits:
1000 0000
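To see that the two values really do share the same 8-bit pattern, here's a minimal sketch (the print_bits helper is just an illustrative name of mine, and it assumes CHAR_BIT is 8):

#include <stdio.h>

static void print_bits(unsigned char v) {
    /* Print the 8 bits of v, most significant first. */
    for (int i = 7; i >= 0; i--)
        putchar(((v >> i) & 1) ? '1' : '0');
    putchar('\n');
}

int main(void) {
    print_bits((unsigned char)128);   /* 10000000 */
    print_bits((unsigned char)-128);  /* also 10000000, matching the stored form on a two's complement machine */
    return 0;
}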
The default for char in your current setup looks to be signed char (note that this isn't fixed by the C standard), and thus when you assign the value 128 to x you're assigning it the bit pattern 1000 0000, so when you compile and print it out, it prints the signed value of that binary representation (meaning -128).
It turns out my environment does the same, treating char as signed char. As expected, if I cast x to unsigned char then I get the expected output of 128:
#include <stdio.h>
#include <stdlib.h>

int main() {
    char x;
    x = 128;
    printf("%d %d\n", x, (unsigned char)x);
    return 0;
}
gives me the output of -128 128
Hope this helps!
printf is a variadic function, providing an exact type only for the first argument. That means the default argument promotions are applied to the following arguments, so all integers of rank less than int are promoted to int or unsigned int, and all floating-point values of rank smaller than double are promoted to double.
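For instance, a char or float argument never reaches printf as-is; a small sketch of the promotions (my own example, not from the original post):

#include <stdio.h>

int main(void) {
    char c = 'A';
    float f = 1.5f;
    /* In the variadic part of the call, c is promoted to int and
       f is promoted to double, which is why %d and %f match here. */
    printf("%d %f\n", c, f);
    return 0;
}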
If your implementation has CHAR_BIT of 8, plain char is signed, and you have an obliging two's complement implementation, you thus get: 128 (the literal) becomes -128 (char/signed char), which is promoted to -128 (int) and printed as int => -128.
If all the listed conditions except the obliging two's complement implementation are fulfilled, you get a signal or some implementation-defined value. Otherwise you get an output of 128, because 128 fits in char / unsigned char.
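If you want to check which case you're in, a small sketch using the <limits.h> macros (nothing beyond the standard ones) will tell you whether plain char is signed on your implementation and how wide it is:

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("plain char is %s\n", CHAR_MIN < 0 ? "signed" : "unsigned");
    return 0;
}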
Standard quote for case 2 (Thanks to Matt for unearthing the right reference):
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.60)
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
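As a quick illustration of paragraph 2 (the unsigned case, which is fully defined): converting 300 to unsigned char subtracts 256 once, giving 44 (this sketch assumes CHAR_BIT is 8, so UCHAR_MAX is 255):

#include <stdio.h>

int main(void) {
    unsigned char u = 300;   /* 300 - 256 == 44, well defined by 6.3.1.3p2 */
    printf("%d\n", u);       /* u is promoted to int; prints 44 */
    return 0;
}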
This all has nothing to do with variadic functions, default argument promotions etc.
Assuming your system has signed chars, x = 128; performs an out-of-range assignment. The behaviour of this is implementation-defined; meaning that the compiler may choose an action, but it must document what it does (and therefore do it reliably). This action is allowed to include raising a signal.
The usual behaviour of modern compilers for an out-of-range assignment is to truncate the representation of the value to fit in the destination type.
In binary representation, 128 is 000....00010000000.
Truncating this into a signed char gives a signed char with the binary representation 10000000. In two's complement representation, which is used by all modern C systems for negative numbers, this is the representation of the value -128. (For historical curiosity: in one's complement this is -127, and in sign-magnitude it is -0, which may be a trap representation and thus raise a signal.)
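A small sketch of those three interpretations of the same pattern 1000 0000, with the values worked out in the comments (my own illustration, not part of the original answer):

#include <stdio.h>

int main(void) {
    unsigned char pattern = 0x80;                                /* the bit pattern 1000 0000 */
    int twos = -(pattern & 0x80) + (pattern & 0x7F);             /* two's complement: -128 + 0 = -128 */
    int ones = -(int)((~pattern) & 0x7F);                        /* one's complement: -(0111 1111) = -127 */
    int sm   = (pattern & 0x80) ? -(pattern & 0x7F) : pattern;   /* sign-magnitude: -0 */
    printf("%d %d %d\n", twos, ones, sm);                        /* -128 -127 0 */
    return 0;
}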
Finally, printf
accurately prints out this char's value of -128
. The %d
modifier works for char
because of the default argument promotions and the facts that INT_MIN <= CHAR_MIN
and INT_MAX >= CHAR_MAX
.; this behaviour is guaranteed except on systems which have plain char as unsigned, and sizeof(int)==1
(which do exist but you'd know about it if you were on one).
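A quick way to see those guarantees on your own implementation (just a sketch using the <limits.h> macros):

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Every value of char fits in an int on typical implementations,
       which is why a promoted char can be printed with %d. */
    printf("char range: %d .. %d\n", CHAR_MIN, CHAR_MAX);
    printf("int  range: %d .. %d\n", INT_MIN, INT_MAX);
    return 0;
}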