Question
Could someone please explain the two to me because I have to give an explanation of them both in my assignment.
I know what a normal integer is of course and have used the following to describe it:
"An integer is a whole number which can be positive, negative, or zero, but it cannot have a decimal point."
But I'm just not sure about signed and unsigned.
Thanks
Answer 1:
In most languages when you declare an integer, you are declaring a signed integer. If you want to declare an unsigned integer you have to specifically tell the compiler. e.g. in c#
int a; // Signed int
uint b; // Unsigned int.
The difference is that in a signed int, one of the bits is used to indicate whether the number is positive or negative. In an unsigned int, that bit is used to hold value instead. The effect is that an unsigned int can represent positive values roughly twice as large as a signed int of the same width. Put plainly, the range of C#'s int is -2,147,483,648 to 2,147,483,647, while the range of uint is 0 to 4,294,967,295. Both are 32-bit data types.
Answer 2:
The difference between a signed and an unsigned integer is that one bit of the integer is used to hold the sign.
For instance, with two binary digits you can have the following:
Base 2 Base 10
00 0
01 1
10 2
11 3
However, if we take the first digit as a sign bit, with 1 meaning negative (the sign-magnitude convention):
Base 2 Base 10
00 0
01 1
10 -0
11 -1
Notice that sign-magnitude wastes a bit pattern on -0. Real hardware instead uses two's complement, where (for two bits) the top bit carries a weight of -2:
Base 2 Base 10
00 0
01 1
10 -2
11 -1
For further reading, check out the Wikipedia article on two's complement.
Answer 3:
Generally, when you say int, it is a signed int. On platforms where int is 16 bits, a (signed) int has a range of -32,768 to 32,767, while an unsigned int has a range of 0 to 65,535.
An unsigned int can hold only zero and positive numbers, whereas a signed int can hold negative, zero, and positive numbers.
Source: https://stackoverflow.com/questions/19032414/signed-and-unsigned-integers