for every int x: x+1 > x … is this always true?

鱼传尺愫 2021-01-18 00:27

I'm just starting to learn C at school, and I'm trying to get a hold of the basic concepts.

Our homework has a question:

for every int x: x+1 > x

4 Answers
  • 2021-01-18 00:47

    Note that the range of a 32-bit int is [-2147483648, 2147483647], which equals [-2^31, 2^31 - 1].

    So the expression x + 1 > x is true for x in [-2147483648, 2147483646].

    But not for 2147483647: adding 1 to 2147483647 in a 32-bit int causes overflow. Many implementations wrap x + 1 around to -2147483648, but the behavior is undefined in the C standard.

    So,

    • x + 1 > x is true only for x in [-2147483648, 2147483646]
    • For x = 2147483647, x + 1 > x is undefined: the result may be true or false depending on the compiler. If a compiler computes x + 1 as -2147483648, the value will be false.
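    The bullets above can be sketched in C. This is a minimal illustration, not from the original question: the helper name incremented_is_greater is an assumption, and the undefined case at INT_MAX is guarded so the addition itself is never performed out of range.

    ```c
    #include <assert.h>
    #include <limits.h>
    #include <stdbool.h>

    /* A well-defined way to ask "is x + 1 > x?": only perform the
     * addition when it cannot overflow. */
    static bool incremented_is_greater(int x)
    {
        if (x == INT_MAX)
            return false;   /* x + 1 would overflow: undefined behavior */
        return x + 1 > x;   /* safe: overflow is impossible here */
    }

    int main(void)
    {
        assert(incremented_is_greater(0));
        assert(incremented_is_greater(INT_MIN));
        assert(incremented_is_greater(INT_MAX - 1));
        assert(!incremented_is_greater(INT_MAX)); /* the lone counterexample */
        return 0;
    }
    ```

    Note the guard compares against INT_MAX before adding; writing x + 1 first and checking afterwards would already be undefined behavior.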
  • 2021-01-18 00:58

    Yes, x + 1 adds the integer value 1 to x.

    This will be true almost all of the time. But if you add 1 to INT_MAX (which is 2^15 - 1 or greater), you might flip the sign. Think about the binary representation of 01111111 versus 11111111. (Obviously not 32 bits, but the ideas hold.)

    Look up two's complement if you're confused about why it flips. It's a pretty clever implementation of integers that makes addition easy.
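    You can demonstrate the sign-bit flip safely using unsigned arithmetic, which is defined to wrap (this sketch is an illustration added here, not part of the original answer):

    ```c
    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        /* Unsigned arithmetic wraps (defined behavior), so we can safely
         * show the bit pattern a two's-complement machine produces when
         * the INT32_MAX pattern is incremented. */
        uint32_t max_pattern = 0x7FFFFFFFu;  /* bits of INT32_MAX: 0111...1 */
        uint32_t wrapped = max_pattern + 1u; /* wraps to 1000...0 */

        assert(wrapped == 0x80000000u);
        assert((wrapped >> 31) == 1u);       /* the sign bit is now set */
        return 0;
    }
    ```

    On a two's-complement machine, that 1000...0 bit pattern is exactly INT32_MIN, which is why wrapping implementations turn INT_MAX + 1 into a large negative number.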

    EDIT: INT_MAX + 1 is undefined behavior. It doesn't necessarily become INT_MIN. But since x + 1 is not necessarily > x when x == INT_MAX, the answer is clearly false!

  • 2021-01-18 00:59
    x + 1 > x
    

    is 1 for every int value except INT_MAX, where INT_MAX + 1 overflows; therefore the expression x + 1 > x is undefined behavior for x == INT_MAX.

    This actually means a compiler has the right to replace the expression:

    x + 1 > x
    

    with

    1
    

    Since INT_MAX + 1 is undefined behavior, the compiler is entitled to assume that, for this specific > expression, INT_MAX + 1 is > INT_MAX.

    As the x + 1 > x expression is undefined behavior for x == INT_MAX, it is also not safe to assume x + 1 > x can be false (0).

    Note that if x was declared as an unsigned int instead of an int the situation is completely different. unsigned int operands never overflow (they wrap around): UINT_MAX + 1 == 0 and therefore x + 1 > x is 0 for x == UINT_MAX and 1 for all the other x values.
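    The unsigned case described above can be checked directly, since wraparound is defined behavior (a small sketch added for illustration):

    ```c
    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int x = UINT_MAX;

        /* Unsigned arithmetic is defined to wrap modulo UINT_MAX + 1. */
        assert(x + 1u == 0u);   /* UINT_MAX + 1 wraps around to 0 */
        assert(!(x + 1u > x));  /* so x + 1 > x is 0 exactly at UINT_MAX */
        assert(0u + 1u > 0u);   /* ...and 1 for every other value */
        return 0;
    }
    ```

    Because the wraparound is fully specified, a compiler cannot optimize the unsigned comparison down to a constant 1 the way it can for signed int.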

    Modern compilers (like gcc) usually take the opportunity to optimize this expression and replace it with 1.

    For the record, there were serious security issues in well-known server programs that used code like:

     if (ptr + offset < ptr)
    

    The code was meant to trigger a safety check, but the compiler would optimize the if statement away (by replacing the expression with 0), which allowed an attacker to escalate privileges in the server program (by opening up an exploitable buffer overflow, if I remember correctly).
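    A bounds check that stays within defined behavior compares the offset against the room available instead of forming an out-of-range pointer. This is a sketch of the general technique, not the actual server code; the names buf, buf_len, and offset are illustrative assumptions.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Well-defined bounds check: no pointer arithmetic is performed,
     * so there is nothing the compiler can assume never overflows. */
    static int offset_in_bounds(const char *buf, size_t buf_len, size_t offset)
    {
        (void)buf;                /* the check never offsets or dereferences buf */
        return offset <= buf_len; /* plain unsigned comparison, fully defined */
    }

    int main(void)
    {
        char buf[16];
        assert(offset_in_bounds(buf, sizeof buf, 0));
        assert(offset_in_bounds(buf, sizeof buf, sizeof buf));
        assert(!offset_in_bounds(buf, sizeof buf, sizeof buf + 1));
        return 0;
    }
    ```

    Unlike ptr + offset < ptr, this comparison cannot be optimized away, because no step of it relies on overflow ever happening.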

  • 2021-01-18 00:59

    I don't want to hand you the answer, so I'll reply with a question that should get you on the right track.

    What is x + 1 when x is the largest possible value that can be stored in a 32-bit signed integer? (2,147,483,647)
