I have a question about the ranges of ints and floats:
If they're both 4 bytes in size, why do they have different ranges?
They have different ranges of values because their contents are interpreted differently; in other words, they have different representations.
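One way to see that it really is just a matter of interpretation: take the 4 bytes of a float and read them back as an integer. A minimal C sketch, assuming a platform where float and unsigned int are both 4 bytes (true on mainstream platforms):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = 1.0f;
        unsigned int u;

        /* Copy the float's 4 bytes into an unsigned int without
           converting the value, to expose the raw bit pattern. */
        memcpy(&u, &f, sizeof u);

        printf("as a float:       %f\n", f);   /* 1.000000   */
        printf("same bits as int: %u\n", u);   /* 1065353216 */
        return 0;
    }

Same four bytes, wildly different values, because the two types assign different meanings to those bits.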
Floats and doubles are typically represented as something like
+-+--------+-----------------------+
| |        |                       |
+-+--------+-----------------------+
 ^    ^                ^
 |    |                |
 |    |                +--- significand
 |    +--- exponent
 |
 +---- sign bit
where you have 1 bit to represent the sign s (0 for positive, 1 for negative), some number of bits to represent an exponent e, and the remaining bits for a significand (or fraction) f. The value being represented is (-1)^s * f * 2^e.
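The diagram above is generic; for the common concrete case (IEEE 754 single precision: 1 sign bit, 8 exponent bits biased by 127, 23 significand bits, which is what virtually every modern platform uses for float) you can pull the three fields apart like this:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        float f = -6.5f;              /* -1.101 in binary, times 2^2 */
        unsigned int bits;
        memcpy(&bits, &f, sizeof bits);

        unsigned int sign     = bits >> 31;          /* 1 bit   */
        unsigned int exponent = (bits >> 23) & 0xff; /* 8 bits  */
        unsigned int fraction = bits & 0x7fffff;     /* 23 bits */

        printf("sign = %u\n", sign);                  /* 1        */
        printf("exponent = %u (unbiased: %d)\n",
               exponent, (int)exponent - 127);        /* 129, 2   */
        printf("fraction bits = 0x%06x\n", fraction); /* 0x500000 */
        return 0;
    }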
The range of values that can be represented is determined by the number of bits in the exponent; the more bits in the exponent, the wider the range of possible values.
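You can see how lopsided this makes things with the limits from <limits.h> and <float.h> (values shown assume a typical platform with 32-bit int and IEEE 754 float):

    #include <stdio.h>
    #include <limits.h>
    #include <float.h>

    int main(void)
    {
        printf("INT_MAX = %d\n", INT_MAX); /* 2147483647, about 2.1e9 */
        printf("FLT_MAX = %e\n", FLT_MAX); /* about 3.4e38            */
        printf("FLT_MIN = %e\n", FLT_MIN); /* smallest positive
                                              normal float, ~1.2e-38  */
        return 0;
    }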
The precision (informally, the size of the gap between representable values) is determined by the number of bits in the significand. Not every value can be represented exactly in a given number of bits; the more bits you have in the significand, the smaller the gap between any two representable values.
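A quick way to measure those gaps, assuming a C99 library that provides nextafterf:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* nextafterf returns the next representable float toward the
           second argument, so the difference is the local gap size. */
        printf("gap near 1.0:  %g\n",
               nextafterf(1.0f, INFINITY) - 1.0f);  /* ~1.19e-07 */
        printf("gap near 2^24: %g\n",
               nextafterf(16777216.0f, INFINITY) - 16777216.0f); /* 2 */
        printf("16777216 + 1 = %.1f\n", 16777216.0f + 1.0f);
        return 0;
    }

By 2^24 the gap between adjacent floats is already 2, so adding 1 does nothing; a 4-byte int, by contrast, represents every integer in its range exactly.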
Each bit in the significand represents 1/2^n, where n is the bit number counting from the left (starting at 1):
110100...
^^ ^
|| |
|| +------ 1/2^4 = 0.0625
||
|+-------- 1/2^2 = 0.25
|
+--------- 1/2^1 = 0.5
                   ------
                   0.8125
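To make that arithmetic concrete, here's a tiny loop that sums those bit weights (the bits array is just the 110100 example pattern above):

    #include <stdio.h>

    int main(void)
    {
        int bits[] = { 1, 1, 0, 1, 0, 0 }; /* the 110100 pattern   */
        double value  = 0.0;
        double weight = 0.5;               /* 1/2^1, the first bit */

        for (int i = 0; i < 6; i++) {
            if (bits[i])
                value += weight;
            weight /= 2.0;                 /* each bit is worth half
                                              the previous one */
        }
        printf("%g\n", value);             /* 0.8125 */
        return 0;
    }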
Here's a link everyone should have bookmarked: What Every Computer Scientist Should Know About Floating-Point Arithmetic.