Since this question is about the increment operator and speed differences with prefix/postfix notation, I will describe the question very carefully lest Eric Lippert discover it.
I love performance testing and I love fast programs, so I admire your question.
I tried to reproduce your findings and failed. On my Intel i7 x64 system, running your code samples on the .NET 4 framework in the x86|Release configuration, all four test cases produced roughly the same timings.
To do the test I created a brand-new console application project and used the QueryPerformanceCounter API call to get a high-resolution CPU-based timer. I tried two settings for jmax:
jmax = 1000
jmax = 1000000
because the locality of the array can often make a big difference in how the performance behaves as the size of the loop increases. However, both array sizes behaved the same in my tests.
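For reference, here is a minimal sketch of the kind of timing harness I mean. The P/Invoke declarations are the standard ones for QueryPerformanceCounter/QueryPerformanceFrequency; the summing loop and the names (intArray, jmax) are illustrative, not the exact code from the question:

    using System;
    using System.Runtime.InteropServices;

    class TimingHarness
    {
        // High-resolution CPU-based timer via the Win32 performance counter API.
        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceCounter(out long value);

        [DllImport("kernel32.dll")]
        static extern bool QueryPerformanceFrequency(out long value);

        static void Main()
        {
            const int jmax = 1000000;               // also tried 1000
            int[] intArray = new int[jmax];

            long freq, start, stop;
            QueryPerformanceFrequency(out freq);
            QueryPerformanceCounter(out start);

            int total = 0;
            for (int j = 0; j < intArray.Length; j++)
                total += intArray[j];

            QueryPerformanceCounter(out stop);
            Console.WriteLine("{0:F3} ms (total = {1})",
                (stop - start) * 1000.0 / freq, total);
        }
    }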
I have done a lot of performance optimization and one of the things that I have learned is that you can very easily optimize an application so that it runs faster on one particular computer while inadvertently causing it to run slower on another computer.
I am not talking hypothetically here. I have tweaked inner loops and poured hours and days of work into making a program run faster, only to have my hopes dashed because I was optimizing it on my workstation and the target computer was a different model of Intel processor.
So the moral of this story is: an optimization that makes code faster on one machine can make it slower on another, so you have to measure on the hardware you actually care about.
This is why some compilers have special optimization switches for different processors, and why some applications come in different versions even though one version could easily run on all supported hardware.
So if you are going to do testing like this, you have to do it the same way the JIT compiler writers do: perform your tests on a wide variety of hardware and then choose a happy medium that gives the best performance on the most ubiquitous hardware.
OK, after much research (sad, I know!), I think I have answered my own question:
The answer is: maybe. Apparently the JIT compilers do look for patterns (see http://blogs.msdn.com/b/clrcodegeneration/archive/2009/08/13/array-bounds-check-elimination-in-the-clr.aspx) to decide when and how array bounds checking can be optimized away, but whether it is the same pattern I was guessing at, I don't know.
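As an illustration of the kind of pattern that article describes (this is my paraphrase, not code taken from the article, and the method names are mine): when the loop bound is the array's own Length property the JIT can prove every index is in range and drop the per-access check, whereas a bound it cannot tie to the array's length may not get that treatment:

    // Bound is arr.Length itself: the JIT can prove arr[i] is always in range
    // and eliminate the per-access bounds check.
    static int SumChecked(int[] arr)
    {
        int total = 0;
        for (int i = 0; i < arr.Length; i++)
            total += arr[i];
        return total;
    }

    // Bound is a separately supplied count: the JIT may not be able to prove
    // it never exceeds arr.Length, so the bounds check can remain.
    static int SumMaybeChecked(int[] arr, int count)
    {
        int total = 0;
        for (int i = 0; i < count; i++)
            total += arr[i];
        return total;
    }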
In this case it is a moot point, because the relative speed increase of (2) was due to something more than that. It turns out that the x64 JIT compiler is clever enough to work out whether an array length is constant (and seemingly also a multiple of the number of unrolls in a loop), so the code was only bounds checking at the end of each iteration, and each unroll became just:
    total += intArray[j]; j++;
    00000081 8B 44 0B 10    mov         eax,dword ptr [rbx+rcx+10h]
    00000085 03 F0          add         esi,eax
I proved this by changing the app to let the array size be specified on the command line and seeing the different assembler output.
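To make the unrolling concrete, here is a sketch of the shape of loop I am talking about; the unroll factor of four and the method name are illustrative rather than lifted from the original test code:

    static int SumUnrolled()
    {
        const int jmax = 1000;          // constant, and a multiple of the unroll factor
        int[] intArray = new int[jmax];

        int total = 0;
        for (int j = 0; j < intArray.Length; )
        {
            // Because the array is allocated with a constant length that is a
            // multiple of 4, the x64 JIT can prove all four accesses are in range
            // and emit each unroll as the bare mov/add pair shown above.
            total += intArray[j]; j++;
            total += intArray[j]; j++;
            total += intArray[j]; j++;
            total += intArray[j]; j++;
        }
        return total;
    }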
Other things discovered during this exercise:
Interesting results. What I would do is build both versions as optimized release builds, run each of them without a debugger attached so the code gets jitted normally, and then attach a debugger afterwards and compare the machine code the jitter actually generated for each case.
Then you'll know whether the jitter is doing a better job with one than the other. The jitter might, for example, be realizing that in one case it can remove array bounds checks, but not realizing that in the other case. I don't know; I'm not an expert on the jitter.
The reason for all the rigamarole is because the jitter may generate different code when the debugger is attached. If you want to know what it does under normal circumstances then you have to make sure the code gets jitted under normal, non-debugger circumstances.
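One way to arrange that, sketched below (the pause-and-attach approach and all the names here are my own suggestion, not something prescribed above): run the release build on its own, let it call the hot method once so it gets jitted without a debugger present, then pause so you can attach the debugger and look at the disassembly of the already-jitted code:

    using System;
    using System.Runtime.CompilerServices;

    class Program
    {
        // NoInlining keeps the method as a separate unit that is easy to find
        // in the debugger's disassembly window.
        [MethodImpl(MethodImplOptions.NoInlining)]
        static int SumLoop(int[] arr)
        {
            int total = 0;
            for (int j = 0; j < arr.Length; j++)
                total += arr[j];
            return total;
        }

        static void Main()
        {
            int[] data = new int[1000000];

            // First call: the method is jitted now, with no debugger attached.
            Console.WriteLine(SumLoop(data));

            // Attach the debugger here (Debug > Attach to Process), then inspect
            // the disassembly of the code the jitter has already produced.
            Console.WriteLine("Jitted; attach a debugger, then press Enter.");
            Console.ReadLine();

            Console.WriteLine(SumLoop(data));
        }
    }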