I've often noticed gcc converting multiplications into shifts in the executable. Something similar might happen when multiplying an int and a float.
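For instance, a multiply by a constant power of two is a classic target for this strength reduction. Compiling something like the sketch below with `gcc -O2 -S` and inspecting the assembly typically shows a shift (or an address-calculation `lea`) rather than an `imul`; the function name is just for illustration:

```c
/* Illustrative only: with a power-of-two constant, gcc at -O2
 * usually strength-reduces the multiply to a left shift,
 * e.g. "shl $3, ..." or an lea, instead of an imul. */
unsigned scale(unsigned x)
{
    return x * 8;   /* typically compiled as x << 3 */
}
```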
It's not that compilers or compiler writers aren't smart. It's more about obeying the standard and producing all the necessary "side effects" such as Infs, NaNs, and denormals.
It's also about not producing other side effects that aren't called for, such as reading memory. That said, I do recognize that it can be faster in some circumstances.
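To make that concrete, here is a hypothetical shortcut one might imagine for `x * 2.0f` on IEEE 754 single precision: add 1 to the exponent field instead of performing a real multiply. The sketch below (the function name `double_by_exponent_bump` is mine, purely illustrative) agrees with the real multiply for ordinary values but silently breaks Inf, NaN, and denormal inputs, which is exactly the kind of "side effect" the standard obliges the compiler to preserve. Note also that the bit manipulation has to move the value between the float and integer domains, extra traffic that a plain multiply avoids:

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical shortcut for x * 2.0f: bump the exponent field by
 * adding 1 << 23 to the bit pattern. Fine for normal finite floats,
 * wrong for Inf, NaN, and denormals. */
static float double_by_exponent_bump(float x)
{
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);  /* move the bits into the integer domain */
    bits += UINT32_C(1) << 23;       /* exponent field += 1 */
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void)
{
    float tests[] = { 3.0f, INFINITY, NAN, 1e-40f /* a denormal */ };
    for (size_t i = 0; i < sizeof tests / sizeof tests[0]; i++) {
        float x = tests[i];
        printf("x = %-10g  x * 2.0f = %-10g  bump = %g\n",
               x, x * 2.0f, double_by_exponent_bump(x));
    }
    return 0;
}
```

For 3.0f both columns print 6, but the bump turns +Inf into -0, turns a NaN into a tiny negative number, and grossly misscales the denormal, so a conforming compiler can't substitute this trick for the real multiplication.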