I have a question about the ranges of ints and floats:
If they both have the same size of 4 bytes, why do they have different ranges?
You are mixing up two things: the representation of a number, which depends on rules that you (or somebody else) define, and the storage used to hold that number in the computer (the bytes).
For example, you can use just one bit to store a number, and decide that 0 represents -100 and 1 represents +100. Or that 0 represents 0.5 and 1 represents 1.0. The two things, the data and the meaning of the data, are independent.
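To see that independence concretely, here is a small C++ sketch (assuming a 32-bit int and an IEEE 754 float, which is typical but not something the language guarantees): the same four bytes give completely different values depending on which type you read them as.

#include <cstdio>
#include <cstring>

int main() {
    int i = 1078530011;                 // an arbitrary 32-bit pattern
    float f;
    std::memcpy(&f, &i, sizeof f);      // reinterpret the same 4 bytes as a float

    std::printf("as int:   %d\n", i);   // 1078530011
    std::printf("as float: %f\n", f);   // about 3.141593 on IEEE 754 machines
    return 0;
}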
They have different ranges of values because their contents are interpreted differently; in other words, they have different representations.
Floats and doubles are typically represented as something like
+-+-------+------------------------+
| |       |                        |
+-+-------+------------------------+
 ^    ^            ^
 |    |            |
 |    |            +--- significand
 |    +--- exponent
 |
 +---- sign bit
where you have 1 bit to represent the sign s (0 for positive, 1 for negative), some number of bits to represent an exponent e, and the remaining bits for a significand, or fraction, f. The value being represented is (-1)^s * f * 2^e.
The range of values that can be represented is determined by the number of bits in the exponent; the more bits in the exponent, the wider the range of possible values.
The precision (informally, the size of the gap between representable values) is determined by the number of bits in the significand. Not all floating-point values can be represented exactly in a given number of bits. The more bits you have in the significand, the smaller the gap between any two representable values.
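To make that gap concrete, here is a short sketch (not from the answer itself) using std::nextafterf from <cmath>; the printed gaps assume an IEEE 754 single-precision float:

#include <cmath>
#include <cstdio>

int main() {
    float a = 1.0f;
    float b = 1000000.0f;

    // Distance from each value to the next representable float above it.
    std::printf("gap just above 1.0:     %g\n", std::nextafterf(a, 2.0f) - a);   // about 1.19e-07
    std::printf("gap just above 1000000: %g\n", std::nextafterf(b, 2e6f) - b);   // 0.0625
    return 0;
}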
Each bit in the significand represents 1/2^n, where n is the bit number counting from the left:
110100...
^^ ^
|| |
|| +------ 1/2^4 = 0.0625
||
|+-------- 1/2^2 = 0.25
|
+--------- 1/2^1 = 0.5
                   ------
                   0.8125
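As a concrete companion to the diagram, this sketch (assuming an IEEE 754 single-precision float, which the C standard does not require) pulls the sign, exponent, and fraction fields out of the 0.8125 value built up above:

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    float f = 0.8125f;                      // the value from the diagram above
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits);    // grab the raw 32 bits

    unsigned sign     = bits >> 31;             // 1 bit
    unsigned exponent = (bits >> 23) & 0xFF;    // 8 bits, stored with a bias of 127
    unsigned fraction = bits & 0x7FFFFF;        // 23 bits (the leading 1 of a normalized value is implicit)

    // 0.8125 = 1.101b * 2^-1, so expect sign 0, exponent 126 (unbiased -1), fraction 0x500000
    std::printf("sign = %u, exponent = %u (unbiased %d), fraction = 0x%06X\n",
                sign, exponent, (int)exponent - 127, fraction);
    return 0;
}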
Here's a link everyone should have bookmarked: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
Two types with the same size in bytes can have different ranges for sure.
For example, signed int and unsigned int are typically both 4 bytes, but the signed one reserves one of its 32 bits for the sign, which halves its maximum value; the ranges also differ because one of them can be negative. Floats, on the other hand, spend some of their bits on an exponent and a fractional part, so they give up exact integer precision in exchange for a much wider range of magnitudes.
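For instance, a small sketch using the standard <climits> and <cfloat> constants (the values in the comments are the typical ones for a 32-bit int and an IEEE 754 float; the exact numbers are implementation-defined):

#include <cfloat>
#include <climits>
#include <cstdio>

int main() {
    std::printf("INT_MAX  = %d\n", INT_MAX);    // typically 2147483647
    std::printf("UINT_MAX = %u\n", UINT_MAX);   // typically 4294967295 (no sign bit, so twice as large)
    std::printf("FLT_MAX  = %e\n", FLT_MAX);    // typically 3.402823e+38 (huge range, limited precision)
    return 0;
}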
They are totally different - typically int is just a straightforward 2's complement signed integer, while float is a single-precision floating-point representation with 23 bits of mantissa, 8 bits of exponent, and 1 sign bit (see http://en.wikipedia.org/wiki/IEEE_754-2008).
An integer is just a number... Its range depends on the number of bits (and on whether the integer is signed or unsigned).
A floating point number is a whole different thing. It's just a convention for representing a real (possibly fractional) number in binary...
It's coded with a sign bit, an exponent field, and a mantissa.
Read the following article:
http://www.eosgarden.com/en/articles/float/
It will make you understand what floating point values are, from a binary perspective. Then you'll understand the range thing...
The standard does not specify the size in bytes, but it specifies minimum ranges that various integral types must be able to hold. You can infer the minimum size in bytes from that.
Minimum ranges guaranteed by the standard (from "Integer Types In C and C++"):
signed char: -127 to 127
unsigned char: 0 to 255
"plain" char: -127 to 127 or 0 to 255 (depends on default char signedness)
signed short: -32767 to 32767
unsigned short: 0 to 65535
signed int: -32767 to 32767
unsigned int: 0 to 65535
signed long: -2147483647 to 2147483647
unsigned long: 0 to 4294967295
signed long long: -9223372036854775807 to 9223372036854775807
unsigned long long: 0 to 18446744073709551615
Actual platform-specific range values are found in <limits.h> in C, or in <climits> in C++ (or even better, templated std::numeric_limits in the <limits> header).
The standard only requires that:
sizeof(short int) <= sizeof(int) <= sizeof(long int)
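A short sketch of querying the actual values on your platform with std::numeric_limits (the numbers in the comments are what a platform with 32-bit int and 64-bit long long typically reports; yours may differ):

#include <iostream>
#include <limits>

int main() {
    std::cout << "int:       " << std::numeric_limits<int>::min()
              << " to " << std::numeric_limits<int>::max() << '\n';        // e.g. -2147483648 to 2147483647
    std::cout << "unsigned:  0 to "
              << std::numeric_limits<unsigned>::max() << '\n';             // e.g. 0 to 4294967295
    std::cout << "long long: " << std::numeric_limits<long long>::min()
              << " to " << std::numeric_limits<long long>::max() << '\n';  // e.g. -9223372036854775808 to 9223372036854775807
    return 0;
}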
float does not have the same "resolution" as an int despite their seemingly similar size. An int is 2's complement, whereas a float is made up of 23 bits of mantissa, 8 bits of exponent, and 1 sign bit.
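One quick way to see that difference in resolution (a sketch assuming a 32-bit int and an IEEE 754 float): with only 24 bits of effective precision (23 stored plus the implicit leading 1), a float cannot represent every 32-bit integer exactly.

#include <cstdio>

int main() {
    int n = 16777217;                    // 2^24 + 1: needs 25 significant bits
    float f = (float)n;                  // rounds to the nearest representable float

    std::printf("int   : %d\n", n);      // 16777217
    std::printf("float : %.1f\n", f);    // 16777216.0 -- the +1 is lost
    return 0;
}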