What is the difference between Int32 and UInt32?
If they have the same capacity (both are 32-bit types), for what reason does UInt32 exist?
There is no difference between them in how the bits are stored.
The difference is in how those bits are interpreted and represented, for example when printing the value to the terminal.
For example, and sticking with 8-bit values for simplicity: 255 is 0xff, and -1 is also 0xff. Now add each value to itself: 255 + 255 = 510, which truncated to 8 bits is 254, i.e. 0xfe; and -1 + -1 = -2, which in two's complement is also 0xfe. Either way, the math on the raw bits is 0xff + 0xff = 0xfe. So you see, there is no difference between signed and unsigned at the bit level; it's only how we interpret the bits in the end that makes the difference, and in this case, the type indicates how they are interpreted.
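A minimal Swift sketch of the 8-bit example above (Swift's Int8/UInt8 and its wrapping &+ operator stand in for the raw 8-bit arithmetic):

```swift
// Same 8-bit pattern, two interpretations.
let unsignedBits: UInt8 = 0xFF                  // read as unsigned: 255
let signedBits = Int8(bitPattern: unsignedBits) // same byte read as signed: -1

print(unsignedBits, signedBits)                 // 255 -1

// Addition works on the raw bits either way; &+ wraps on overflow.
let u = unsignedBits &+ unsignedBits            // 255 + 255 -> 254 (0xFE)
let s = signedBits &+ signedBits                // -1 + -1   -> -2  (0xFE)

print(String(u, radix: 16))                     // "fe"
print(String(UInt8(bitPattern: s), radix: 16))  // "fe" -- identical bits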
Signedness becomes important when the compiler has to convert a value from a smaller type to a larger one: signed values are widened via sign extension, unsigned values via zero extension. So in this case, it's important to indicate the type so that the compiler can do the right thing.
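To illustrate the widening case, another small Swift sketch (the same rules apply in C-family languages with int8_t/uint8_t):

```swift
let signedByte: Int8 = -1      // bits: 0xFF
let unsignedByte: UInt8 = 255  // bits: 0xFF, same pattern

// Widening to 32 bits: the extension is chosen from the source type.
let wideSigned = Int32(signedByte)       // sign-extended: 0xFFFFFFFF == -1
let wideUnsigned = UInt32(unsignedByte)  // zero-extended: 0x000000FF == 255

print(String(UInt32(bitPattern: wideSigned), radix: 16)) // "ffffffff"
print(String(wideUnsigned, radix: 16))                   // "ff"
```

The same 0xFF byte widens to two different 32-bit patterns, which is exactly why the type has to carry the signedness information.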