CLR JIT optimizations violate causality?


This is a side effect of the way the JIT optimizer works. It does more work if there is less code to generate. The loop in your original snippet gets compiled to this:

                @float += 0.1f;
0000000f  fld         dword ptr ds:[0025156Ch]          ; push(intermediate), st0 = 0.1
00000015  faddp       st(1),st                          ; st0 = st0 + st1
            for (int @int = 0; @int < 10; @int += 1) {
00000017  inc         eax  
00000018  cmp         eax,0Ah 
0000001b  jl          0000000F 

When you add the extra Console.WriteLine() statement, it compiles it to this:

                @float += 0.1f;
00000011  fld         dword ptr ds:[00961594h]          ; st0 = 0.1
00000017  fadd        dword ptr [ebp-8]                 ; st0 = st0 + @float
0000001a  fstp        dword ptr [ebp-8]                 ; @float = st0
            for (int @int = 0; @int < 10; @int += 1) {
0000001d  inc         eax  
0000001e  cmp         eax,0Ah 
00000021  jl          00000011 

Note the difference at address 15 versus addresses 17 and 1a: the first loop keeps the intermediate result on the FPU stack, while the second stores it back to the @float local variable. As long as the value stays inside the FPU, the result is calculated with full precision. Storing it back, however, truncates the intermediate result to a float, losing many bits of precision in the process.
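
For reference, the question's own snippet isn't reproduced here, but a minimal reconstruction of the kind of loop being discussed might look like this (the names @float and @int come from the interleaved source lines above; the rest is assumed):

    float @float = 0f;
    for (int @int = 0; @int < 10; @int += 1)
    {
        @float += 0.1f;
    }
    Console.WriteLine(@float == 1.0f);   // presumably True with the first (register-only) loop,
                                         // False once @float is written back each iteration

Per the explanation above, it is the extra Console.WriteLine() elsewhere in the method that tips the jitter from the first form into the second.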

While unpleasant, I don't believe this is a bug. The x64 JIT compiler behaves differently still. You can make your case at connect.microsoft.com.

Eric Lippert

FYI, the C# spec notes that this behaviour is legal and common; there are several other questions covering similar scenarios in more detail.
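
If I remember the spec correctly, it also notes that an explicit cast can be used to force a value of a floating-point type back to the exact precision of its type. As a hedged illustration (my own, not code from the question), rewriting the loop body like this should make the result independent of where the jitter keeps the intermediate sum:

    @float = (float)(@float + 0.1f);   // the explicit (float) cast forces the narrowing
                                       // that is otherwise left up to the runtime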

Did you run this on an Intel processor?

One theory is that the JIT allowed @float to be accumulated entirely in a floating-point register, which carries the full 80 bits of precision. That way the calculation can stay accurate enough for the @float == 1.0f comparison to come out true.

The second version of the code did not fit into registers entirely, so @float had to be "spilled" to memory; that causes the 80-bit value to be rounded down to single precision, giving the result expected from single-precision arithmetic.

But that's just a very random guess. One would have to inspect the actual machine code generated by the JIT compiler (debug with disassembly view open).
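
To see much the same effect without reading assembly, here is a small sketch of my own (not code from the question) that imitates the two cases by hand, using a double accumulator to stand in for the wide register and a float accumulator for the spilled value:

    float narrow = 0f;   // rounded back to single precision every iteration (the "spilled" case)
    double wide = 0d;    // keeps extra precision across iterations (stands in for the wide register)
    for (int i = 0; i < 10; i++)
    {
        narrow += 0.1f;
        wide += 0.1f;
    }
    Console.WriteLine(narrow == 1.0f);        // False: narrow ends up as 1.0000001f
    Console.WriteLine((float)wide == 1.0f);   // True: a single rounding at the end lands on exactly 1.0f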

Edit:

Hm... I tested your code locally (Intel Core 2, Windows 7 x64, 64-bit CLR) and I always got the "expected" rounding error. Both in release and debug configuration.

The following is the disassembly Visual Studio displays for the first code snippet on my machine:

xorps       xmm0,xmm0 
movss       dword ptr [rsp+20h],xmm0 
        for (int @int = 0; @int < 10; @int += 1)
mov         dword ptr [rsp+24h],0 
jmp         0000000000000061 
        {
            @float += 0.1f;
movss       xmm0,dword ptr [000000A0h]       // xmm0 = 0.1
addss       xmm0,dword ptr [rsp+20h]         // xmm0 = xmm0 + @float
movss       dword ptr [rsp+20h],xmm0         // <-- @float gets stored in memory
        for (int @int = 0; @int < 10; @int += 1)
mov         eax,dword ptr [rsp+24h] 
add         eax,1 
mov         dword ptr [rsp+24h],eax 
cmp         dword ptr [rsp+24h],0Ah 
jl          0000000000000042 
        }
        Console.WriteLine(@float == 1.0f);
etc.

There are differences between the x64 and the x86 JIT compilers, but I don't have access to a 32-bit machine.
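
As an aside, a 32-bit machine shouldn't be needed to try the x86 jitter: if I'm not mistaken, building the test with Platform Target set to x86 runs it under the 32-bit CLR even on x64 Windows. A trivial way to check which runtime the process ended up on:

    // IntPtr is 4 bytes under the 32-bit CLR and 8 bytes under the 64-bit CLR.
    Console.WriteLine("Running as a " + (IntPtr.Size * 8) + "-bit process");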

My theory is that without the ToString line, the compiler is able to statically optimize the function down to a single value, and that it somehow compensates for the floating-point error. But when the ToString line is added, the optimizer has to treat the float differently because it is required by the method call. That's just a guess.
