I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float.
Common floating-point formats, particularly IEEE 754, do not store the exponent as a simple integer, and treating it as an integer will not produce correct results.
In 32-bit float or 64-bit double, the exponent field is 8 or 11 bits, respectively. The exponent codes 1 to 254 (in float) or 1 to 2046 (in double) do act like integers: If you add one to one of these values and the result is one of these values, then the represented value doubles. However, adding one fails in these situations:

- The value is zero or subnormal, so the exponent field is 0. Adding one to the field adds 2^-126 (in float) or 2^-1022 (in double) to the value; it does not double it.
- The value is at least 2^127 (in float) or 2^1023 (in double), so the exponent field is already 254 or 2046. Adding one produces the encoding of an infinity or a NaN, not double the value.
- The value is an infinity or a NaN, so the exponent field is already at its maximum, 255 or 2047. Adding one overflows the field entirely.
(The above is for positive signs. The situation is symmetric with negative signs.)
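For a normal value whose exponent code stays within the 1 to 254 range, the doubling really can be done by adding one to the exponent field. Here is a minimal C sketch, assuming float is IEEE 754 binary32; double_via_exponent is a hypothetical helper, not a standard function, and memcpy is used rather than a pointer cast to avoid strict-aliasing problems:

```c
/* Sketch only: double a *normal* float by adding one to its exponent field.
 * Assumes float is IEEE 754 binary32 and the exponent code is in 1..254. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float double_via_exponent(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* reinterpret the float's bit pattern */
    bits += UINT32_C(1) << 23;        /* the exponent field occupies bits 23..30 */
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    printf("%g -> %g\n", 3.0f, double_via_exponent(3.0f));   /* 3 -> 6 */
    printf("%g -> %g\n", 0.75f, double_via_exponent(0.75f)); /* 0.75 -> 1.5 */
    return 0;
}
```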
As others have noted, some processors do not have facilities for manipulating the bits of floating-point values quickly. Even on those that do, the exponent field is not isolated from the other bits, so adding one to it with an ordinary integer add carries into the sign bit in the last case above.
Although some applications can tolerate shortcuts such as neglecting subnormals, NaNs, or even infinities, it is rare that an application can ignore zero. Since adding one to the exponent field mishandles zero, the shortcut is not usable in general.
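To see those failures concretely, the same exponent-field increment can be applied to zero and to infinity. This is a sketch under the same assumptions (IEEE 754 binary32, memcpy bit reinterpretation); bump_exponent is a hypothetical helper:

```c
/* Sketch of the failure cases for the exponent-increment trick.
 * Assumes float is IEEE 754 binary32. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float bump_exponent(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    bits += UINT32_C(1) << 23;        /* add one to the exponent field */
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void)
{
    printf("2 * 0.0 via exponent: %g\n", bump_exponent(0.0f));     /* 1.17549e-38, not 0 */
    printf("2 * inf via exponent: %g\n", bump_exponent(INFINITY)); /* -0: carry into the sign bit */
    return 0;
}
```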