Why doesn't a compiler optimize floating-point *2 into an exponent increment?

悲&欢浪女 2021-02-06 20:45

I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float.

9 Answers
  •  不思量自难忘°
    2021-02-06 21:30

    Here's an actual compiler optimization I'm seeing with GCC 10:

    x = 2.0 * hi * lo;
    

    Generates this code:

    mulsd   %xmm1, %xmm0      # x = hi * lo;
    addsd   %xmm0, %xmm0      # x += x;
    
