Why doesn't a compiler optimize floating-point *2 into an exponent increment?

悲&欢浪女 2021-02-06 20:45

I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float.

9 Answers
  •  野趣味 (OP)
     2021-02-06 21:32

    It's not about compilers or compiler writers not being smart. It's about obeying the standard and producing all the required "side effects" such as Infs, NaNs, and denormals, which a bare exponent increment would get wrong.

    Also, it's about not introducing side effects that aren't called for, such as reading memory. That said, I recognize it can be faster in some circumstances.
