Signed int range confusion

终归单人心 2021-01-29 09:10

This question might be very basic, but I am posting it only after days of googling, because I want a proper basic understanding of signed integers in C.

Some say a signed int ranges over 1) [-32767 to 32767], while others say 2) [-32768 to 32767]. What is the actual range of signed int in C?

5 Answers
  • 2021-01-29 09:24

    The value 32767 is the maximum positive value you can represent in a signed 16-bit integer. The corresponding C type is short.

    The int type occupies at least as many bytes as short and at most as many bytes as long. On 16-bit processors, int is 2 bytes (the same as short); on 32-bit and 64-bit architectures it is typically 4 bytes.

    No matter the architecture, the minimum value of int is INT_MIN and the maximum value of int is INT_MAX.

    Similarly, there are constants in <limits.h> for the minimum and maximum values of short (SHRT_MIN and SHRT_MAX), long, char, etc. You don't need to use hardcoded constants or guess what the minimum value of int is on your system.
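
    As a minimal sketch (standard C, nothing platform-specific assumed), this prints the sizes and <limits.h> limits that your own implementation defines:

        /* Print the sizes and <limits.h> limits of short, int and long. */
        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
            printf("sizeof(short) = %zu, SHRT_MIN = %d,  SHRT_MAX = %d\n",
                   sizeof(short), SHRT_MIN, SHRT_MAX);
            printf("sizeof(int)   = %zu, INT_MIN  = %d,  INT_MAX  = %d\n",
                   sizeof(int), INT_MIN, INT_MAX);
            printf("sizeof(long)  = %zu, LONG_MIN = %ld, LONG_MAX = %ld\n",
                   sizeof(long), LONG_MIN, LONG_MAX);
            return 0;
        }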


    Representation #1 (the symmetric range) is called "sign and magnitude". It uses the most significant bit to store the sign and the remaining bits to store the absolute value of the number. It was used by some early computers, probably because it seemed a natural mapping of how numbers are written in mathematics. However, it is not the most convenient representation for binary computers.

    Representation #2 (the asymmetric range) is two's complement. The two's-complement system has the advantage that the fundamental arithmetic operations of addition, subtraction, and multiplication are identical to those for unsigned binary numbers (as long as the inputs are represented in the same number of bits and any overflow beyond those bits is discarded from the result). This is why it is the preferred encoding nowadays.
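
    A small illustration of that point, assuming the exact-width types int16_t/uint16_t exist on your platform (which implies two's complement for them): the very same 16-bit addition, with overflow discarded, is correct for both the unsigned and the signed reading of the bits.

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            uint16_t a = 0xFFFFu;             /* unsigned view: 65535, signed view: -1 */
            uint16_t b = 0x0003u;             /* 3 in both views                       */
            uint16_t sum = (uint16_t)(a + b); /* overflow discarded, wraps to 0x0002   */

            /* Reinterpreting the bit pattern as int16_t is implementation-defined
               before C23, but gives the expected value on two's-complement machines. */
            printf("unsigned: %u + %u = %u\n", (unsigned)a, (unsigned)b, (unsigned)sum);
            printf("signed:   %d + %d = %d\n",
                   (int)(int16_t)a, (int)(int16_t)b, (int)(int16_t)sum);
            return 0;
        }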

  • 2021-01-29 09:30

    It depends on your environment: typically int can store -2147483648 to 2147483647 if it is 32 bits wide and two's complement is used, but the C specification only requires that int can store at least -32767 to 32767.

    Quote from N1256 5.2.4.2.1 Sizes of integer types <limits.h>

    Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

    — minimum value for an object of type int
    INT_MIN    -32767    // −(2^15 − 1)
    — maximum value for an object of type int
    INT_MAX    +32767    // 2^15 − 1
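
    A compile-time sketch of that guarantee (requires C11 for _Static_assert): every conforming implementation must accept these assertions, because int is required to cover at least [-32767, +32767].

        #include <limits.h>

        _Static_assert(INT_MIN <= -32767, "int must reach at least -32767");
        _Static_assert(INT_MAX >= +32767, "int must reach at least +32767");

        int main(void) { return 0; }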

  • 2021-01-29 09:38

    The C standard specifies the lowest limits for the ranges of the integer types. As it is written in the Standard (5.2.4.2.1 Sizes of integer types):

    1. ...Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign.

    For objects of type int these lowest limits are

    — minimum value for an object of type int

    INT_MIN    -32767    // −(2^15 − 1)
    

    — maximum value for an object of type int

    INT_MAX    +32767    // 2^15 − 1
    

    For the two's complement representation of integers, the number of positive values is one less than the number of negative values. So if only two bytes are used for the representation of objects of type int, then INT_MIN will be equal to -32768.

    Take into account that 32768 is greater in magnitude than the value used in the Standard, so it satisfies the Standard's requirement.

    On the other hand, for the "sign and magnitude" representation, the limits (when 2 bytes are used) will be exactly the ones shown in the Standard, that is, -32767 to 32767.

    So the actual limits used in an implementation depend on the width of the integer types and their representation, as the sketch below shows.
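
    A minimal way to observe this from <limits.h>: on a two's-complement implementation the range is asymmetric (INT_MIN + INT_MAX is -1), while on sign-and-magnitude or ones'-complement implementations it is symmetric (the sum is 0).

        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
            /* Both operands are plain ints and the sum is -1 or 0, so no overflow. */
            if (INT_MIN + INT_MAX == -1)
                printf("asymmetric range (two's complement): INT_MIN = %d\n", INT_MIN);
            else
                printf("symmetric range (sign-magnitude or ones' complement): INT_MIN = %d\n", INT_MIN);
            return 0;
        }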

  • 2021-01-29 09:44

    My doubt is: what is the actual range of signed int in C, 1) [-32767 to 32767] or 2) [-32768 to 32767]?

    The whole point of C, and its advantage of high portability to old and new platforms, is that code should not have to care.

    C defines the range of int with 2 macros: INT_MIN and INT_MAX. The C spec specifies:
    INT_MIN is -32,767 or less.
    INT_MAX is +32,767 or more.

    If code needs a 16-bit two's-complement type, use int16_t. If code needs a 32-bit or wider type, use long or int_least32_t, etc. Do not write code assuming int is something it is not defined to be.
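
    A minimal sketch of that advice, assuming the optional exact-width type int16_t is available: pick the type by the range the code actually needs, not by what plain int happens to be.

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            int16_t       sample  = -32768;      /* exactly 16 bits, two's complement */
            int_least32_t counter = 2000000000;  /* at least 32 bits, possibly wider  */

            printf("sample  = %" PRId16 "\n", sample);
            printf("counter = %" PRIdLEAST32 "\n", counter);
            return 0;
        }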

  • 2021-01-29 09:49

    Today, signed ints are usually done in two's complement notation.

    The highest bit is the "sign bit"; it is set for all negative numbers.

    This means you have fifteen bits left to represent different magnitudes.

    With the highest bit unset, you can (with 16 bits total) represent the values 0..32767.

    With the highest bit set, and because you already have a representation for zero, you can represent the values -1..-32768.
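
    To see those two halves of the 16-bit two's-complement encoding, here is a small sketch (again assuming int16_t is available; the out-of-range conversions are implementation-defined before C23 but wrap as expected on two's-complement machines):

        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
            printf("0x7FFF -> %d\n", (int)(int16_t)0x7FFF);   /* highest bit clear:  32767 */
            printf("0x8000 -> %d\n", (int)(int16_t)0x8000u);  /* highest bit set:   -32768 */
            printf("0xFFFF -> %d\n", (int)(int16_t)0xFFFFu);  /* highest bit set:       -1 */
            return 0;
        }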

    This is, however, implementation-defined; other representations do exist as well. The actual range limits for signed integers on your platform / for your compiler are the ones found in your environment's <limits.h>. That is the only definite authority.

    On today's desktop systems, an int is usually 32 or 64 bits wide, giving a correspondingly much larger range than the 16-bit 32767 / 32768 you are talking about. So either those people are talking about really old platforms, really old knowledge, embedded systems, or the minimum guaranteed range: the standard states that INT_MIN must be at least -32767 and INT_MAX at least +32767, the lowest common denominator.
