C# vs C - Big performance difference

爱一瞬间的悲伤 2020-11-28 01:39

I'm finding massive performance differences between similar code in C and C#.

The C code is:

#include <stdio.h>
#include <math.h>
#include <time.h>

int main(void)
{
    double root;
    clock_t start = clock();
    for (int i = 0; i < 100000000; i++)
        root = sqrt((double)i);
    printf("Time elapsed: %f\n", (double)(clock() - start) / CLOCKS_PER_SEC);
    return 0;
}
13 Answers
  • 2020-11-28 02:17

    It would seem to me that this is nothing to do with the languages themselves, rather it is to do with the different implementations of the square root function.

  • 2020-11-28 02:29

    If you just single-step the code at the assembly level, including stepping through the square-root routine, you will probably get the answer to your question.

    No need for educated guessing.

  • 2020-11-28 02:34

    I'll keep it brief; the question is already marked answered. C# has the great advantage of a well-defined floating point model, which happens to match the native operation mode of the FPU and SSE instruction sets on x86 and x64 processors. No coincidence there. The JITter compiles Math.Sqrt() to a few inline instructions.

    Native C/C++ is saddled with years of backwards compatibility. The /fp:precise, /fp:fast and /fp:strict compile options are the most visible example. Under the default settings, the compiler must call a CRT function that implements sqrt() and checks the selected floating point options to arrive at the required result. That's slow.

  • 2020-11-28 02:34

    Maybe the C# compiler is noticing you don't use root anywhere, so it just skips the whole for loop. :)

    That may not be the case, but I suspect whatever the cause is, it is compiler-implementation dependent. Try compiling your C program with the Microsoft compiler (cl.exe, available as part of the Win32 SDK) with optimizations enabled in Release mode. I bet you'll see a performance improvement over the other compiler.

    EDIT: I don't think the compiler can just optimize out the for loop, because it would have to know that Math.Sqrt() doesn't have any side effects.

  • 2020-11-28 02:37

    The other factor that may be an issue here is that the C compiler compiles to generic native code for the processor family you target. The MSIL generated from the C# code, on the other hand, is JIT-compiled to target the exact processor you have, complete with any optimisations that makes possible. So the native code generated from the C# may be considerably faster than the C.

  • 2020-11-28 02:38

    You must be comparing debug builds. I just compiled your C code, and got

    Time elapsed: 0.000000
    

    If you don't enable optimizations, any benchmarking you do is completely worthless. (And if you do enable optimizations, the loop gets optimized away. So your benchmarking code is flawed too. You need to force it to run the loop, usually by summing up the result or similar, and printing it out at the end)

    It seems that what you're measuring is basically "which compiler inserts the most debugging overhead". And it turns out the answer is C. But that doesn't tell us which program is fastest. Because when you want speed, you enable optimizations.

    By the way, you'll save yourself a lot of headaches in the long run if you abandon any notion of languages being "faster" than each other. C# no more has a speed than English does.

    There are certain things in the C language that would be efficient even under a naive non-optimizing compiler, and there are others that rely heavily on the compiler to optimize everything away. And of course, the same goes for C# or any other language.

    The execution speed is determined by:

    • the platform you're running on (OS, hardware, other software running on the system)
    • the compiler
    • your source code

    A good C# compiler will yield efficient code. A bad C compiler will generate slow code. What about a C compiler which generated C# code, which you could then run through a C# compiler? How fast would that run? Languages don't have a speed. Your code does.
