Typecasting int to char in printf() in C

南笙 2020-12-20 00:47
int *int_pointer = malloc(10);

*int_pointer = 53200;

printf(\"The integer at byte #0 is set to: %d \\n\", (char) *int_pointer);

RESULT: -48

4 Answers
  • 2020-12-20 01:35

    Converting a value to a signed type that isn't wide enough to hold it isn't undefined behaviour, but the result is implementation-defined: the implementation is free to pick whatever result (or even signal) it pleases, so typically it does whatever is the least effort for the implementors, since they are automatically right and you are always wrong.

    Therefore, the standard alone won't tell you why it happened (your compiler's documentation has to) - you should be glad you didn't get 52301, or 42, or "Help! I'm trapped in an integer library!" instead.
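
    For what it's worth, here is a minimal sketch (assuming an 8-bit char, as on typical platforms) contrasting the fully specified unsigned char conversion with the implementation-defined char one:

    #include <stdio.h>

    int main(void)
    {
        int value = 53200;

        /* Conversion to unsigned char is fully specified by the standard:
           the value is reduced modulo UCHAR_MAX + 1 (256 here), giving 208. */
        unsigned char low = (unsigned char) value;

        /* Conversion of an out-of-range value to a signed char is
           implementation-defined: typical two's complement machines
           give -48, but the standard does not promise that. */
        printf("as unsigned char: %d\n", low);
        printf("as char (implementation-defined): %d\n", (char) value);
        return 0;
    }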

  • 2020-12-20 01:47

    I think @fritzone is correct.
    Since char is signed here with a range of -128 to 127, the value wraps around modulo 256: 53200 % 256 = 208, and 208 doesn't fit in a signed char, so it comes out as 208 - 256 = -48 instead of 53200.
    Try the value 53201 and it will print out -47, and so on.
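
    A quick way to check that pattern (a sketch, assuming char is signed and 8 bits wide, as it evidently is for the asker):

    #include <stdio.h>

    int main(void)
    {
        /* Each value is reduced modulo 256 and then reinterpreted in the
           signed char range, so 53200 -> -48, 53201 -> -47, and so on. */
        for (int v = 53200; v <= 53203; v++)
            printf("%d -> %d\n", v, (char) v);
        return 0;
    }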

  • 2020-12-20 01:50

    That's simple: 53200 = 0xCFD0, and the low byte 0xD0 = 208, which for a signed char is 208 - 256 = -48 ...
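
    A sketch that prints those bytes, assuming a typical platform with a 32-bit int and signed 8-bit char:

    #include <stdio.h>

    int main(void)
    {
        int value = 53200;

        /* 53200 = 0xCFD0; the cast keeps only the low byte 0xD0 = 208,
           which reinterpreted as a signed char is 208 - 256 = -48. */
        printf("value    = 0x%X\n", (unsigned) value);          /* 0xCFD0 */
        printf("low byte = 0x%X\n", (unsigned) (value & 0xFF)); /* 0xD0   */
        printf("as char  = %d\n", (char) value);                /* -48    */
        return 0;
    }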

  • 2020-12-20 01:52

    Language-lawyer perspective:

    I believe the correct reference in the C99/C11 standard is §6.3.1.3 (emphasis mine):

    1. When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
    2. Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
    3. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

    Note that the char type is problematic, since it is also implementation-defined whether it is represented as signed or unsigned. The only cast that is fully defined by the standard itself is the cast to unsigned char.
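
    A small sketch of the three cases (the results in the comments are what a typical two's complement implementation with a signed 8-bit char chooses; the last one is not guaranteed):

    #include <stdio.h>

    int main(void)
    {
        /* Case 1: value representable in the new type -- unchanged. */
        printf("%d\n", (char) 42);             /* 42 */

        /* Case 2: new type unsigned -- reduced modulo UCHAR_MAX + 1. */
        printf("%d\n", (unsigned char) 53200); /* 208, fully defined */

        /* Case 3: new type signed, value not representable --
           implementation-defined result (or signal). */
        printf("%d\n", (char) 53200);          /* -48 here, not portable */
        return 0;
    }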

    Practical view:

    Assuming that for your implementation sizeof(int) == 4 and that it uses two's complement to store signed integers, the number 53200 is represented as:

    0000 0000 0000 0000 1100 1111 1101 0000
    

    Note that if you have a little-endian CPU (probably a true assumption), then the order of the bytes, or more strictly how they are actually laid out in memory, is reversed, i.e. that number is stored as:

    1101 0000 1100 1111 0000 0000 0000 0000
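
    You can check the in-memory byte order yourself with a sketch like this (assuming a 4-byte int; a little-endian machine prints d0 cf 00 00):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        int value = 53200; /* 0x0000CFD0 */
        unsigned char bytes[sizeof value];

        /* Copy the object representation and dump it byte by byte,
           in the order the bytes actually sit in memory. */
        memcpy(bytes, &value, sizeof value);
        for (size_t i = 0; i < sizeof value; i++)
            printf("%02x ", bytes[i]);
        printf("\n");
        return 0;
    }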
    

    What (unsigned char) 53200 produces is (in a purely mathematical sense) the result of repeated subtraction (note that the standard guarantees sizeof(unsigned char) == 1; here CHAR_BIT is 8, so the maximum value is 255):

    53200 - 256 - 256 - ... - 256 = 53200 % 256 = 208
    

    which is in binary 1101 0000

    It can be mathematically proven that the result is always the same as a "cut-off": only the last, least-significant byte survives the cast.
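
    A tiny sketch verifying that the modulo reduction, the low-byte mask and the unsigned char cast all agree for positive values around the one in question:

    #include <stdio.h>

    int main(void)
    {
        /* For non-negative v, v % 256, v & 0xff and (unsigned char) v
           all yield the same low byte. */
        for (int v = 53198; v <= 53202; v++)
            printf("%d: %% 256 = %d, & 0xff = %d, (unsigned char) = %d\n",
                   v, v % 256, v & 0xff, (unsigned char) v);
        return 0;
    }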

    Note for printf():

    As pointed out by @pmg, printf() is a variadic function and, due to default argument promotion, its optional arguments of type unsigned char (or signed char and char as well) are always promoted to int, but this time it's "just a formality".
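
    For illustration, a sketch showing that a char argument reaches printf() as an int (the C99 hh length modifier is shown only for comparison):

    #include <stdio.h>

    int main(void)
    {
        signed char c = -48;

        /* c is promoted to int before printf() ever sees it,
           so %d matches the promoted argument. */
        printf("%d\n", c);

        /* C99 also offers the hh length modifier, which tells printf()
           the argument was originally a signed char. */
        printf("%hhd\n", c);
        return 0;
    }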

    Note for an alternative bitwise-operator solution:

    As an alternative solution you can use the bitwise AND operator (&) with a proper mask to obtain the least-significant byte of a particular number, for example:

    *int_pointer & 0xff   /* mask is 0000 0000 0000 0000 1111 1111 */
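
    Applied to the code from the question, that might look like the sketch below (note that 208 is the unsigned value of the low byte, not -48):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *int_pointer = malloc(sizeof *int_pointer);
        if (int_pointer == NULL)
            return 1;

        *int_pointer = 53200;

        /* Mask off everything but the least-significant byte: 0xD0 = 208. */
        printf("The low byte is: %d\n", *int_pointer & 0xff);

        free(int_pointer);
        return 0;
    }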
    