Why doesn't a compiler optimize floating-point *2 into an exponent increment?

Asked by 悲&欢浪女 on 2021-02-06 20:45

I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float.

9 Answers
  •  Answered by 野趣味 (OP), 2021-02-06 21:35

    Actually, this is what happens in the hardware.

    The 2 is also passed into the FPU as a floating-point number, with a mantissa of 1.0 and an exponent of 1 (i.e. a value of 1.0 × 2^1). For the multiplication, the exponents are added and the mantissas multiplied.

    Given that there is dedicated hardware to handle the complex case (multiplying with values that are not powers of two), and the special case is not handled any worse than it would be using dedicated hardware, there is no point in having additional circuitry and instructions.
