Floating point numbers, like all numbers, must be stored in memory as a string of 0's and 1's. It's all bits to the computer. How floating point differs from integer is in how we interpret the 0's and 1's when we want to look at them.
One bit is the "sign" (0 = positive, 1 = negative), 8 bits are the exponent (here treated as a signed value ranging from -128 to +127), and 23 bits are the number known as the "mantissa" (the fraction). So a binary representation of the form (S1)(P8)(M23) has the value (-1)^S * M * 2^P
The "mantissa" takes on a special form. In normal scientific notation we display the "one's place" along with the fraction. For instance:
4.39 x 10^2 = 439
In binary the "one's place" is a single bit. Since we drop all of the leading 0's in scientific notation (they carry no information), the first bit of a normalized number is guaranteed to be a 1:
1.101 x 2^3 = 1101 = 13
Since we are guaranteed that the first bit will be a 1, we remove this bit when storing the number to save space. So the above number is stored as just 101 (for the mantissa). The leading 1 is assumed.
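If you want to see those three fields on a real machine, here is a small Python sketch (an illustration only; note that actual IEEE-754 stores the 8 exponent bits with a bias of 127, so 2^3 shows up as the raw field value 130, whereas the walk-through below keeps the exponent as a plain signed number for simplicity):

import struct

def float32_fields(x):
    # Pack x as a 32-bit float and split the raw bits into the three fields.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign     = bits >> 31           # 1 bit
    exponent = (bits >> 23) & 0xFF  # 8 bits (stored with a bias of 127 in real IEEE-754)
    mantissa = bits & 0x7FFFFF      # 23 bits (the leading 1 is implicit, as described above)
    return sign, exponent, mantissa

print(float32_fields(13.0))   # (0, 130, 5242880): 13 = 1.101 x 2^3, and 5242880 = 0b101 << 20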
As an example, let's take the binary string
00000010010110000000000000000000
Breaking it into its components:
Sign  Power     Mantissa
0     00000100  10110000000000000000000
+     +4        1.1011
+     +4        1 + .5 + .125 + .0625
+     +4        1.6875
Applying our simple formula:
(-1)^S * M * 2^P
(-1)^0 * 1.6875 * 2^(+4)
1 * 1.6875 * 16
27
In other words, 00000010010110000000000000000000 represents 27 in this floating-point format. (Real IEEE-754 stores the 8 exponent bits with a bias of 127 rather than as a signed value, so the actual single-precision pattern for 27 is slightly different, but the principle is exactly the same.)
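Here is the same decode as a short Python sketch; it follows the simplified recipe above (8-bit two's-complement exponent, implicit leading 1) rather than real IEEE-754, since that is the arithmetic this walk-through uses:

def decode_simple(bits):                 # bits: a 32-character string of 0s and 1s
    sign = -1 if bits[0] == "1" else 1
    power = int(bits[1:9], 2)
    if power >= 128:                     # treat the 8 exponent bits as two's complement
        power -= 256
    mantissa = 1.0                       # start with the assumed leading 1
    for i, b in enumerate(bits[9:], start=1):
        mantissa += int(b) * 2.0 ** -i
    return sign * mantissa * 2.0 ** power

print(decode_simple("00000010010110000000000000000000"))   # 27.0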
For many numbers, however, there is no exact binary representation. Much like how 1/3 = 0.333... repeats forever, 1/100 is 0.00000010100011110101110000... with a repeating "10100011110101110000". A 32-bit computer can't store the entire number in floating point, so it makes its best guess.
0.0000001010001111010111000010100011110101110000
Sign  Power      Mantissa
+     -7         1.01000111101011100001010
0     -00000111  01000111101011100001010
0     11111001   01000111101011100001010

01111100101000111101011100001010
(note that negative 7 is produced using 2's complement)
It should be immediately clear that 01111100101000111101011100001010 looks nothing like 0.01
More importantly, however, this contains a truncated version of a repeating binary fraction. The original expansion contained a repeating "10100011110101110000"; we've truncated it to the 23 bits that fit: 01000111101011100001010
Translating this floating point number back into decimal via our formula we get 0.0099999998 (more precisely, 0.0099999997764826). Note that this is single precision (32-bit); a 64-bit double has far more mantissa bits and gets much closer, but it still cannot represent 0.01 exactly.
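You can watch this "best guess" happen from Python (a rough sketch: Python's float is double precision, so we squeeze the value through a 32-bit float with the struct module to mimic single precision):

import struct

stored = struct.unpack(">f", struct.pack(">f", 0.01))[0]   # round 0.01 to the nearest 32-bit float
print(f"{stored:.20f}")   # 0.00999999977648258209
print(stored == 0.01)     # False -- the single-precision best guess is not exactly 0.01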
A Decimal Equivalent
If it helps to understand the problem better, let's look at decimal scientific notation when dealing with repeating decimals.
Let's assume that we have 10 "boxes" to store digits. Therefore if we wanted to store a number like 1/16 we would write:
+---+---+---+---+---+---+---+---+---+---+
| + | 6 | . | 2 | 5 | 0 | 0 | e | - | 2 |
+---+---+---+---+---+---+---+---+---+---+
Which is clearly just 6.25 e -2, where e is shorthand for "times 10 to the power of". We've allocated 4 boxes for the digits after the decimal point even though we only needed 2 (padding with zeroes), and we've allocated 2 boxes for signs (one for the sign of the number, one for the sign of the exponent).
Using 10 boxes like this we can display numbers ranging from -9.9999 e -9 to +9.9999 e +9.
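This 10-box format is essentially what Python's scientific-notation formatting produces (a toy illustration of the analogy, not anything a real floating-point unit does; Python pads the exponent to two digits, but the idea is the same):

print(f"{1/16:+.4e}")   # +6.2500e-02, exactly the ten boxes above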
This works fine for anything with 4 or fewer decimal places, but what happens when we try to store a number like 2/3?
+---+---+---+---+---+---+---+---+---+---+
| + | 6 | . | 6 | 6 | 6 | 7 | e | - | 1 |
+---+---+---+---+---+---+---+---+---+---+
This new number 0.66667 does not exactly equal 2/3. In fact, it's off by 0.000003333... If we were to try and write 0.66667 in base 3, we would get 0.2000000000012... instead of 0.2
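You can check the error with a quick Python sketch (Python's floats add a little binary rounding of their own here, but it is far smaller than the 5-digit decimal rounding we are illustrating):

stored = float(f"{2/3:.4e}")   # 0.66667, all that our ten boxes can hold
print(stored - 2/3)            # about 3.333e-06, the part we threw away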
This problem may become more apparent if we take something with a larger repeating decimal, like 1/7. This has 6 repeating digits: 0.142857142857...
Storing this into our decimal computer we can only show 5 of these digits:
+---+---+---+---+---+---+---+---+---+---+
| + | 1 | . | 4 | 2 | 8 | 6 | e | - | 1 |
+---+---+---+---+---+---+---+---+---+---+
This number, 0.14286, is off by 0.000002857...
It's "close to correct", but it's not exactly correct, and so if we tried to write this number in base 7 we would get some hideous number instead of 0.1
. In fact, plugging this into Wolfram Alpha we get: .10000022320335...
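If you want to reproduce that result yourself, exact rational arithmetic makes it easy. Here is a small sketch using Python's fractions module; digits_in_base is just a throwaway helper made up for this illustration:

from fractions import Fraction

def digits_in_base(x, base, count):
    # Return the first `count` digits of the fractional part of x written in `base`.
    digits = []
    for _ in range(count):
        x *= base
        digit = int(x)        # the next digit is the integer part
        digits.append(str(digit))
        x -= digit            # keep only the remaining fraction
    return "0." + "".join(digits)

print(digits_in_base(Fraction(66667, 100000), 3, 14))   # 0.20000000000120, the base-3 expansion of 0.66667
print(digits_in_base(Fraction(14286, 100000), 7, 14))   # 0.10000022320335, the base-7 expansion of 0.14286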
These minor fractional differences are the same kind of error as the 0.0099999998 we got earlier (as opposed to the 0.01 we wanted).