While testing the performance of floats in .NET, I stumbled onto a weird case: for certain values, multiplication seems way slower than normal. Here is the test case:
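(The original test case is not reproduced here. As a hypothetical reconstruction for illustration only, it was presumably something like the following, which times a long chain of float multiplications; the names n and param match the usage later in this answer.)

using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        // Time repeated multiplication by various parameters. With param just
        // below 1, n shrinks each iteration and eventually enters the subnormal
        // range, where many processors slow down sharply.
        foreach (float param in new[] { 0.4f, 0.6f, 0.99f })
        {
            float n = 1f;
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 100_000_000; i++)
                n = n * param;
            sw.Stop();
            Console.WriteLine($"param = {param}: {sw.ElapsedMilliseconds} ms (n = {n})");
        }
    }
}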
As others have mentioned, various processors do not support normal-speed calculations when subnormal floating-point values are involved. This is either a design defect (if the behavior impairs your application or is otherwise troublesome) or a feature (if you prefer the cheaper processor or alternative use of silicon that was enabled by not using gates for this work).
It is illuminating to understand why there is a transition at ½:
Suppose you are multiplying by p. Eventually, the value becomes so small that the result is some subnormal value (below 2^-126 in 32-bit IEEE binary floating point). Then multiplication becomes slow. As you continue multiplying, the value continues decreasing, and it reaches 2^-149, which is the smallest positive number that can be represented. Now, when you multiply by p, the exact result is of course 2^-149·p, which is between 0 and 2^-149, which are the two nearest representable values. The machine must round the result and return one of these two values.
Which one? If p is less than ½, then 2^-149·p is closer to 0 than to 2^-149, so the machine returns 0. Then you are not working with subnormal values anymore, and multiplication is fast again. If p is greater than ½, then 2^-149·p is closer to 2^-149 than to 0, so the machine returns 2^-149, and you continue working with subnormal values, and multiplication remains slow. If p is exactly ½, the rounding rules say to use the value that has zero in the low bit of its significand (fraction portion), which is zero (2^-149 has 1 in its low bit).
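This boundary is easy to observe in C#, where float.Epsilon is 2^-149, the smallest positive subnormal float. A minimal sketch, assuming the runtime performs float multiplication in true single precision (as the 64-bit .NET JIT does using SSE):

using System;

class RoundingAtTheBottom
{
    static void Main()
    {
        float tiny = float.Epsilon;  // 2^-149

        float below = tiny * 0.4f;   // exact result closer to 0 -> rounds to 0
        float above = tiny * 0.6f;   // exact result closer to 2^-149 -> rounds to 2^-149
        float tie   = tiny * 0.5f;   // exact result 2^-150, a tie -> round-to-even picks 0

        Console.WriteLine(below == 0f);   // True: the chain escapes the subnormal range
        Console.WriteLine(above == tiny); // True: the chain stays subnormal (and slow)
        Console.WriteLine(tie == 0f);     // True: 0 has a zero low bit in its significand
    }
}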
You report that .99f appears fast. By the analysis above, repeated multiplication by .99f should end up in the slow, subnormal regime and stay there. Perhaps the code you posted is not exactly the code for which you measured fast performance with .99f? Perhaps the starting value or the number of iterations was changed?
There are ways to work around this problem. One is to use the hardware's mode settings that replace any subnormal value used or produced with zero, called "denormals are zero" and "flush to zero" modes. I do not use .NET and cannot advise you about how to set these modes in .NET.
Another approach is to add a tiny value each time, such as
n = (n+e) * param;
where e is at least 2^-126 / param. Note that 2^-126 / param should be calculated rounded upward, unless you can guarantee that n is large enough that (n+e) * param does not produce a subnormal value. This also presumes n is not negative. The effect of this is to make sure the calculated value is always large enough to be in the normal range, never subnormal.
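A minimal sketch of this workaround, assuming .NET Core 3.0 or later (for MathF.BitIncrement) and the hypothetical loop shape from the reconstruction above. BitConverter.Int32BitsToSingle(0x00800000) constructs 2^-126 exactly from its bit pattern, and MathF.BitIncrement bumps the quotient up one unit in the last place so that e is at least 2^-126 / param even if the division rounded down:

using System;

class NormalRangeWorkaround
{
    static void Main()
    {
        float param = 0.99f; // hypothetical multiplier from the question

        // 2^-126, the smallest normal float, built exactly from its bit pattern.
        float minNormal = BitConverter.Int32BitsToSingle(0x00800000);

        // The division rounds to nearest and may round down, so bump up one ulp.
        float e = MathF.BitIncrement(minNormal / param);

        float n = 0f; // presumed non-negative, as discussed above
        for (int i = 0; i < 1_000_000; i++)
            n = (n + e) * param; // (n + e) * param >= 2^-126, so never subnormal

        Console.WriteLine(n);
    }
}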
Adding e in this way of course changes the results. However, if you are, for example, processing audio with some echo effect (or other filter), then the value of e is too small to cause any effects observable by humans listening to the audio. It is likely too small to cause any change in the hardware behavior when producing the audio.