Why doesn't a compiler optimize floating-point *2 into an exponent increment?

悲&欢浪女 2021-02-06 20:45

I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float. For example, couldn't multiplying a float by 2 be done by simply incrementing its exponent? Do compilers ever perform this optimization?
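
For reference, the integer strength reduction the question refers to can be observed with a snippet like this (an illustrative example, not code from the original post); compiling with something like gcc -O2 -S typically shows a shift or an lea rather than an imul:

    /* gcc at -O2 usually compiles a multiplication by a power of two
     * into a left shift (or an lea on x86), not an integer multiply. */
    int times_eight(int n)
    {
        return n * 8;
    }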

9 Answers
  •  庸人自扰
    2021-02-06 21:45

    If you think that multiplying by two means increasing the exponent by 1, think again. Here are the possible cases for IEEE 754 floating-point arithmetic (a bit-level sketch of them follows the list):

    Case 1: Infinity and NaN stay unchanged.

    Case 2: Floating-point numbers with the largest finite exponent are turned into Infinity of the same sign: the exponent is increased by one (to all ones) and the mantissa is set to zero; the sign bit is preserved.

    Case 3: Normalised floating-point numbers with exponent less than the maximum possible exponent have their exponent increased by one. Yippee!!!

    Case 4: Denormalised floating-point numbers with the highest mantissa bit set have their exponent increased by one, turning them into normalised numbers (the mantissa shifts left by one, its top bit becoming the implicit leading 1).

    Case 5: Denormalised floating-point numbers with the highest mantissa bit cleared, including +0 and -0, have their mantissa shifted to the left by one bit position, leaving the exponent unchanged.
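
    To make the case analysis above concrete, here is a minimal bit-level sketch for IEEE 754 single precision. The helper name times_two_bits and the memcpy type punning are illustrative choices, not anything a compiler actually emits, and the sketch ignores the rounding mode and the overflow/inexact exception flags that real hardware also has to get right:

        #include <stdint.h>
        #include <string.h>

        /* Double a float by manipulating its IEEE 754 bit pattern.
         * Illustrative only: real FP hardware also raises exception flags
         * and honours the current rounding mode on overflow. */
        static float times_two_bits(float x)
        {
            uint32_t bits;
            memcpy(&bits, &x, sizeof bits);           /* reinterpret the bits */

            uint32_t sign     = bits & 0x80000000u;
            uint32_t exponent = (bits >> 23) & 0xFFu;
            uint32_t mantissa = bits & 0x007FFFFFu;

            if (exponent == 0xFFu) {
                /* Case 1: Infinity and NaN stay unchanged. */
            } else if (exponent == 0xFEu) {
                /* Case 2: largest finite exponent overflows to Infinity
                 * of the same sign (under the default rounding mode). */
                bits = sign | 0x7F800000u;
            } else if (exponent != 0u) {
                /* Case 3: normalised number: add one to the exponent field. */
                bits += 0x00800000u;
            } else {
                /* Cases 4 and 5: denormals (including +0 and -0): shift the
                 * mantissa left by one. If its top bit was set, that bit lands
                 * in the exponent field and the result is normalised (case 4);
                 * otherwise the exponent field stays zero (case 5). */
                bits = sign | (mantissa << 1);
            }

            memcpy(&x, &bits, sizeof x);
            return x;
        }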

    I very much doubt that integer code produced by a compiler to handle all these cases correctly would be anywhere near as fast as the floating-point hardware built into the processor. And it is only suitable for multiplication by 2.0; for multiplication by 4.0 or 0.5, a whole new set of rules applies.

    For the special case of multiplication by 2.0, you might instead replace x * 2.0 with x + x, and many compilers do this when it helps: a processor might, for example, be able to issue one addition and one multiplication in the same cycle, but not two operations of the same kind. So sometimes you would prefer x * 2.0 and sometimes x + x, depending on what other operations need to be done at the same time.
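
    As a footnote to the x * 2.0 versus x + x point, here is the kind of pair one might compare (an illustrative file, not from the original answer); which instruction the compiler picks for each depends on the target and the surrounding code, and gcc -O2 -S makes it easy to inspect:

        /* Two equivalent ways to double a value; the compiler is free to
         * emit either a multiply or an add for both, depending on scheduling. */
        float double_by_mul(float x) { return x * 2.0f; }
        float double_by_add(float x) { return x + x; }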
