Why doesn't a compiler optimize floating-point *2 into an exponent increment?

悲&欢浪女 · 2021-02-06 20:45

I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float: for example, 2 * f could simply increment the exponent of f by one, saving some cycles. Do compilers generally do this, perhaps if one requests it (e.g. via -ffast-math), or do I need to do it myself with the scalb()/ldexp()/frexp() family?
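To make the idea concrete, here is a minimal sketch (the function name and fallback policy are my own, not anything a compiler actually emits) of what an exponent-increment version of 2 * f would have to look like for IEEE-754 single precision. Note how much checking is needed before the trick is even safe:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: multiply a float by 2 by bumping the 8-bit
 * IEEE-754 exponent field (bits 23..30). Zeros, denormals, infinities
 * and NaNs must fall back to a real multiply, which is exactly the
 * kind of branching that makes the "optimization" unattractive. */
float times_two_via_exponent(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);       /* well-defined type pun */
    uint32_t exp = (bits >> 23) & 0xFFu;
    if (exp == 0 || exp >= 0xFEu)         /* zero/denormal, or would overflow into inf/NaN */
        return 2.0f * f;                  /* let the FPU handle the edge cases */
    bits += 1u << 23;                     /* exponent + 1  ==  value * 2 */
    memcpy(&f, &bits, sizeof f);
    return f;
}
```

Even before considering rounding modes and floating-point exceptions, the integer round trip through memory plus the branch is unlikely to beat a single hardware multiply, which is a large part of why compilers for FPU-equipped targets don't bother.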

9 Answers
囚心锁ツ · 2021-02-06 21:29

It may be useful for embedded-systems compilers to have a special scale-by-power-of-two pseudo-op that the code generator can translate in whatever fashion is optimal for the machine in question, since on some embedded processors adjusting the exponent can be an order of magnitude faster than a full power-of-two multiplication. On the embedded micros where multiplication is slowest, however, a compiler could probably achieve a bigger performance boost by having the floating-point multiply routine check its arguments at run time and skip over the parts of the mantissa that are zero.
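For reference, standard C already exposes the scale-by-power-of-two operation this answer describes as ldexpf(), and an implementation for such a target is free to lower it to exponent arithmetic. A minimal usage sketch (the wrapper name is illustrative):

```c
#include <math.h>
#include <stdio.h>

/* ldexpf(f, k) computes f * 2^k and is the portable spelling of the
 * scale-by-power-of-two pseudo-op; a soft-float implementation may
 * adjust the stored exponent directly instead of multiplying. */
float scale_up_by_8(float f)
{
    return ldexpf(f, 3);    /* f * 2^3 */
}

int main(void)
{
    printf("%f\n", scale_up_by_8(1.5f));   /* prints 12.000000 */
    return 0;
}
```

Unlike the hand-rolled bit manipulation above, ldexpf() handles denormals, infinities, and NaNs correctly by specification, so it is the safer way to request this optimization explicitly.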
