I have a question about the ranges of ints and floats:
If they both have the same size of 4 bytes, why do they have different ranges?
You are conflating two different things: the representation of a number, which depends on rules that you (or somebody else) define, and the storage used to keep the number in the computer (the bytes).
For example, you could use a single bit to store a number and decide that 0 represents -100 and 1 represents +100. Or that 0 represents 0.5 and 1 represents 1.0. The two things, the data and the meaning of the data, are independent.