Question
I was writing an instructive example for a colleague to show him why testing floats for equality is often a bad idea. The example I went with was adding .1 ten times, and comparing against 1.0 (the one I was shown in my introductory numerical class). I was surprised to find that the two results were equal (code + output).
float @float = 0.0f;
for (int @int = 0; @int < 10; @int += 1)
{
    @float += 0.1f;
}
Console.WriteLine(@float == 1.0f);
Some investigation showed that this result could not be relied upon (much like float equality itself). The one I found most surprising was that adding code after the calculation could change its result (code + output). Note that this example has exactly the same code and IL, with one more line of C# appended.
float @float = 0.0f;
for (int @int = 0; @int < 10; @int += 1)
{
    @float += 0.1f;
}
Console.WriteLine(@float == 1.0f);
Console.WriteLine(@float.ToString("G9"));
I know I'm not supposed to use equality on floats and thus shouldn't care too much about this, but I found it to be quite surprising, as has just about everyone I've shown it to. Doing stuff after you've performed a calculation changes the value of the preceding calculation? I don't think that's the model of computation people usually have in their minds.
I'm not totally stumped: it seems safe to assume that there's some kind of optimization occurring in the "equal" case that changes the result of the calculation (building in debug mode prevents the "equal" case). Apparently, the optimization is abandoned when the CLR finds that it will later need to box the float.
I've searched a bit but couldn't find a reason for this behavior. Can anyone clue me in?
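For completeness, the usual workaround (a minimal sketch, not part of the original question) is to compare against a tolerance instead of using ==; the 1e-6f tolerance below is an arbitrary choice for illustration:
// Tolerance-based comparison instead of ==. A suitable epsilon depends on the
// magnitudes involved; 1e-6f is only illustrative.
static bool NearlyEqual(float a, float b, float tolerance = 1e-6f)
{
    return Math.Abs(a - b) <= tolerance;
}

float @float = 0.0f;
for (int @int = 0; @int < 10; @int += 1)
{
    @float += 0.1f;
}
Console.WriteLine(NearlyEqual(@float, 1.0f)); // True however the JIT chooses to round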
Answer 1:
This is a side effect of the way the JIT optimizer works. It does more work if there is less code to generate. The loop in your original snippet gets compiled to this:
@float += 0.1f;
0000000f fld dword ptr ds:[0025156Ch] ; push(intermediate), st0 = 0.1
00000015 faddp st(1),st ; st0 = st0 + st1
for (int @int = 0; @int < 10; @int += 1) {
00000017 inc eax
00000018 cmp eax,0Ah
0000001b jl 0000000F
When you add the extra Console.WriteLine() statement, it compiles it to this:
@float += 0.1f;
00000011 fld dword ptr ds:[00961594h] ; st0 = 0.1
00000017 fadd dword ptr [ebp-8] ; st0 = st0 + @float
0000001a fstp dword ptr [ebp-8] ; @float = st0
for (int @int = 0; @int < 10; @int += 1) {
0000001d inc eax
0000001e cmp eax,0Ah
00000021 jl 00000011
Note the difference at address 15 versus addresses 17 and 1a: the first loop keeps the intermediate result on the FPU stack, while the second loop stores it back to the @float local variable. As long as it stays inside the FPU, the result is calculated with full (80-bit extended) precision. Storing it back, however, truncates the intermediate result to a float, losing many bits of precision in the process.
While unpleasant, I don't believe this is a bug. The x64 JIT compiler behaves differently again. You can make your case at connect.microsoft.com.
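A rough C#-level analogue of the effect described above (only a sketch, not what the JIT literally does): accumulating in a wider type stands in for keeping the running sum on the FPU stack, while the plain float variable is rounded back to 32 bits after every addition. The exact output still depends on the runtime and hardware.
float truncated = 0.0f;   // rounded to 32-bit precision at every store
double extended = 0.0;    // stands in for the FPU's wider intermediate precision
for (int i = 0; i < 10; i += 1)
{
    truncated += 0.1f;
    extended += 0.1f;     // the same 32-bit constant, summed at higher precision
}
Console.WriteLine(truncated == 1.0f);        // typically False: the error accumulates at each step
Console.WriteLine((float)extended == 1.0f);  // typically True: rounding to float happens only once
Console.WriteLine(truncated.ToString("G9"));
Console.WriteLine(((float)extended).ToString("G9"));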
Answer 2:
FYI, the C# spec notes that this behaviour is legal and common. See these questions for more details and similar scenarios:
Why does this floating-point calculation give different results on different machines?
C# XNA Visual Studio: Difference between "release" and "debug" modes?
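A related consequence (a hedged sketch, not taken from the original answer): since intermediates may be kept at higher precision, an explicit cast back to float is the usual way to request rounding to the declared precision at every step, although whether this changes the observable result still depends on the JIT and hardware.
float @float = 0.0f;
for (int @int = 0; @int < 10; @int += 1)
{
    // The explicit cast asks for the intermediate to be rounded back to
    // 32-bit precision instead of being kept in a wider register.
    @float = (float)(@float + 0.1f);
}
Console.WriteLine(@float == 1.0f); // typically False once every step is rounded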
Answer 3:
Did you run this on an Intel processor?
One theory is that the JIT allowed @float to be accumulated entirely in a floating-point register, which carries the full 80 bits of precision. That way the calculation can be accurate enough to compare equal to 1.0.
The second version of the code did not fit into registers entirely, so @float had to be "spilled" to memory, which rounds the 80-bit value down to single precision and gives the result expected from single-precision arithmetic.
But that's just a very random guess. One would have to inspect the actual machine code generated by the JIT compiler (debug with disassembly view open).
Edit:
Hm... I tested your code locally (Intel Core 2, Windows 7 x64, 64-bit CLR) and I always got the "expected" rounding error, in both the release and debug configurations.
The following is the disassembly Visual Studio displays for the first code snippet on my machine:
xorps xmm0,xmm0
movss dword ptr [rsp+20h],xmm0
for (int @int = 0; @int < 10; @int += 1)
mov dword ptr [rsp+24h],0
jmp 0000000000000061
{
@float += 0.1f;
movss xmm0,dword ptr [000000A0h]
addss xmm0,dword ptr [rsp+20h]
movss dword ptr [rsp+20h],xmm0 // <-- @float gets stored in memory
for (int @int = 0; @int < 10; @int += 1)
mov eax,dword ptr [rsp+24h]
add eax,1
mov dword ptr [rsp+24h],eax
cmp dword ptr [rsp+24h],0Ah
jl 0000000000000042
}
Console.WriteLine(@float == 1.0f);
etc.
There are differences between the x64 and the x86 JIT compilers, but I don't have access to a 32-bit machine.
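If you want to know which of the two JITs you are exercising when reproducing this, a quick runtime check (not part of the original answer) is:
// Reports whether the process is running under the 32-bit or the 64-bit CLR,
// which matters here because the two JIT compilers treat the loop differently.
Console.WriteLine(Environment.Is64BitProcess ? "64-bit process" : "32-bit process");
Console.WriteLine(Environment.Version); // CLR version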
Answer 4:
My theory is that without the ToString line, the compiler is able to statically optimize the function down to a single value, and that this somehow compensates for the floating-point error. But when the ToString line is added, the optimizer has to treat the float differently because it is required by the method call. That's just a guess.
Source: https://stackoverflow.com/questions/2225503/clr-jit-optimizations-violates-causality