When is assembly faster than C?

挽巷 2020-12-02 03:18

One of the stated reasons for knowing assembler is that, on occasion, it can be employed to write code that will be more performant than writing that code in a higher-level language such as C.

30 Answers
  • 2020-12-02 03:53

    I have a bit-transposition operation that needs to be done on 192 or 256 bits every interrupt, and the interrupt fires every 50 microseconds.

    The transposition follows a fixed map (a hardware constraint). In C it took around 10 microseconds. When I translated it to assembler, taking into account the specific features of the map, caching values in specific registers, and using bit-oriented operations, it took less than 3.5 microseconds.
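
    For readers who haven't seen this kind of task, here is a minimal C++ sketch of a fixed-map bit permutation (64 bits for brevity, and the map is a made-up stand-in; the real one was dictated by the hardware). The assembler win comes from folding a known map into precomputed masks and register-cached intermediates instead of looping bit by bit:

    #include <cstdint>

    // Output bit i comes from input bit kMap[i].  Bit reversal stands in
    // here for the real hardware-defined map.
    static uint8_t kMap[64];

    static void init_map()
    {
        for (int i = 0; i < 64; ++i)
            kMap[i] = 63 - i;   // stand-in permutation
    }

    // Straightforward C version: one shift/mask per bit.  With the map
    // known at build time, hand-written assembly (or hand-specialized C)
    // can merge many of these single-bit moves into mask-and-shift
    // operations on whole groups of bits.
    uint64_t transpose_bits(uint64_t in)
    {
        uint64_t out = 0;
        for (int i = 0; i < 64; ++i)
            out |= ((in >> kMap[i]) & 1ull) << i;
        return out;
    }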

  • 2020-12-02 03:53

    gcc has become a widely used compiler. Its optimizations in general are not that good: far better than the average programmer writing assembler, but for real performance, not that good. Some compilers are simply incredible in the code they produce. So as a general answer, there are going to be many places where you can go into the compiler's output, tweak the assembler for performance, or simply rewrite the routine from scratch.

  • 2020-12-02 03:54

    Matrix operations using SIMD instructions are probably faster than compiler-generated code.
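
    As a hedged illustration of what that looks like with SSE intrinsics (halfway between C and assembly, and a sketch rather than a tuned routine): a 4x4 single-precision matrix-vector multiply, column-major, done as four broadcast-multiply-adds on 4-wide registers:

    #include <xmmintrin.h>  // SSE

    // 4x4 matrix * 4-vector; cols[i] is the i-th column of the matrix.
    __m128 mat4_mul_vec4(const __m128 cols[4], __m128 v)
    {
        // Broadcast each component of v across a register...
        __m128 x = _mm_shuffle_ps(v, v, _MM_SHUFFLE(0, 0, 0, 0));
        __m128 y = _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 1, 1, 1));
        __m128 z = _mm_shuffle_ps(v, v, _MM_SHUFFLE(2, 2, 2, 2));
        __m128 w = _mm_shuffle_ps(v, v, _MM_SHUFFLE(3, 3, 3, 3));
        // ...then the product is four column * scalar multiply-adds.
        __m128 r = _mm_mul_ps(cols[0], x);
        r = _mm_add_ps(r, _mm_mul_ps(cols[1], y));
        r = _mm_add_ps(r, _mm_mul_ps(cols[2], z));
        r = _mm_add_ps(r, _mm_mul_ps(cols[3], w));
        return r;
    }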

  • 2020-12-02 03:55

    Pretty much any time the compiler sees floating-point code, a hand-written version will be quicker if you're using an old, bad compiler. (2019 update: this is not true in general for modern compilers. Especially when compiling for anything other than x87, compilers have an easier time with SSE2 or AVX for scalar math, or with any non-x86 ISA that has a flat FP register set, unlike x87's register stack.)

    The primary reason is that the compiler can't perform robust optimisations on floating-point code without changing the results. See this article from MSDN for a discussion on the subject. Here's an example where the assembly version is twice the speed of the C version (compiled with VS2005):

    #include "stdafx.h"
    #include <windows.h>
    
    float KahanSum(const float *data, int n)
    {
       float sum = 0.0f, C = 0.0f, Y, T;
    
       for (int i = 0 ; i < n ; ++i) {
          Y = *data++ - C;
          T = sum + Y;
          C = T - sum - Y;
          sum = T;
       }
    
       return sum;
    }
    
    float AsmSum(const float *data, int n)
    {
      float result = 0.0f;
    
      _asm
      {
        mov esi,data
        mov ecx,n
        fldz
        fldz
    l1:
        fsubr [esi]
        add esi,4
        fld st(0)
        fadd st(0),st(2)
        fld st(0)
        fsub st(0),st(3)
        fsub st(0),st(2)
        fstp st(2)
        fstp st(2)
        loop l1
        fstp result
        fstp result
      }
    
      return result;
    }
    
    int main (int, char **)
    {
      int count = 1000000;
    
      float *source = new float [count];
    
      for (int i = 0 ; i < count ; ++i) {
        source [i] = static_cast <float> (rand ()) / static_cast <float> (RAND_MAX);
      }
    
      LARGE_INTEGER start, mid, end;
    
      float sum1 = 0.0f, sum2 = 0.0f;
    
      QueryPerformanceCounter (&start);
    
      sum1 = KahanSum (source, count);
    
      QueryPerformanceCounter (&mid);
    
      sum2 = AsmSum (source, count);
    
      QueryPerformanceCounter (&end);
    
      cout << "  C code: " << sum1 << " in " << (mid.QuadPart - start.QuadPart) << endl;
      cout << "asm code: " << sum2 << " in " << (end.QuadPart - mid.QuadPart) << endl;
    
      return 0;
    }
    

    And some numbers from my PC running a default release build:

      C code: 500137 in 103884668
    asm code: 500137 in 52129147
    

    Out of interest, I swapped the loop instruction for a dec/jnz pair and it made no difference to the timings - sometimes quicker, sometimes slower. I guess the memory-limited aspect dwarfs other optimisations. (Editor's note: more likely the FP latency bottleneck is enough to hide the extra cost of loop. Doing two Kahan summations in parallel for the odd/even elements, and adding them at the end, could maybe speed this up by a factor of 2; see the sketch below.)

    Whoops, I was running a slightly different version of the code and it outputted the numbers the wrong way round (i.e. C was faster!). Fixed and updated the results.
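
    For the curious, a sketch of the editor's note above: two independent Kahan accumulators over the even/odd elements, so the two FP dependency chains can overlap, combined at the end. (Assumes n is even for brevity; this is the shape of the idea, not a benchmarked implementation.)

    float KahanSum2(const float *data, int n)
    {
       float sum0 = 0.0f, c0 = 0.0f;   // accumulator for even elements
       float sum1 = 0.0f, c1 = 0.0f;   // accumulator for odd elements

       for (int i = 0 ; i < n ; i += 2) {
          float y0 = data[i]     - c0;
          float t0 = sum0 + y0;
          c0 = t0 - sum0 - y0;
          sum0 = t0;

          float y1 = data[i + 1] - c1;
          float t1 = sum1 + y1;
          c1 = t1 - sum1 - y1;
          sum1 = t1;
       }

       return sum0 + sum1;  // final combine drops the compensation terms
    }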

  • 2020-12-02 03:55

    One of the possibilities with the CP/M-86 version of PolyPascal (sibling to Turbo Pascal) was to replace the "use-BIOS-to-output-characters-to-the-screen" facility with a machine-language routine which, in essence, was given the x, the y, and the string to put there.

    This allowed the screen to be updated much, much faster than before!

    There was room in the binary to embed machine code (a few hundred bytes), and there was other stuff there too, so it was essential to squeeze in as much as possible.

    It turns out that since the screen was 80x25, both coordinates could fit in a byte each, so both could fit in a two-byte word. This allowed the necessary calculations to be done in fewer bytes, since a single add could manipulate both values simultaneously.

    To my knowledge there are no C compilers which can merge multiple values in a register, do SIMD-style instructions on them, and split them out again later (and I don't think the machine instructions would be shorter anyway).
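
    The packing trick itself translates to C directly, even if no compiler will invent it for you. A minimal sketch (function names hypothetical), assuming neither byte ever overflows into its neighbour:

    #include <cstdint>

    // x and y each fit in a byte (80x25 screen), so both live in one
    // 16-bit word.
    uint16_t pack_xy(uint8_t x, uint8_t y)
    {
        return (uint16_t)((y << 8) | x);
    }

    // A single 16-bit add updates both coordinates at once - the trick
    // that saved bytes in the embedded routine.
    uint16_t move_cursor(uint16_t xy, uint8_t dx, uint8_t dy)
    {
        return (uint16_t)(xy + ((dy << 8) | dx));
    }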

  • 2020-12-02 03:55

    Longpoke, there is just one limitation: time. When you don't have the resources to optimize every single change to the code, to spend your time allocating registers and optimizing a few spills away and whatnot, the compiler will win every single time. You make your modification to the code, recompile, and measure. Repeat if necessary.

    Also, you can do a lot on the high-level side. And inspecting the resulting assembly may give the IMPRESSION that the code is crap when in practice it will run faster than alternatives you would think are quicker. An example:

    int y = data[i];
    // do some stuff here..
    call_function(y, ...);

    The compiler will read the data, push it to the stack (spill) and later read it back from the stack and pass it as an argument. Sounds shite? It might actually be very effective latency compensation and result in a faster runtime.

    // optimized version
    call_function(data[i], ...); // not so optimized after all..

    The idea with the "optimized" version was that it reduces register pressure and avoids spilling. But in truth, the "shitty" version was faster!

    Looking at the assembly code - just counting the instructions and concluding "more instructions, slower" - would be a misjudgment.

    The thing to pay attention to here is: many assembly experts think they know a lot, but know very little. The rules change from one architecture to the next, too. There is no silver-bullet x86 code, for example, which is always the fastest. These days it's better to go by rules of thumb:

    • memory is slow
    • cache is fast
    • try to use the cache better (see the sketch after this list)
    • how often are you going to miss? do you have a latency-compensation strategy?
    • you can execute 10-100 ALU/FPU/SSE instructions in the time of one single cache miss
    • application architecture is important..
    • .. but it doesn't help when the problem isn't in the architecture
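
    As an illustration of those cache rules of thumb (a sketch, not a benchmark): both functions below add up the same array, but the row-major walk streams through memory sequentially while the column-major walk can miss the cache on nearly every access:

    const int N = 1024;

    long long sum_rows(const int (&a)[N][N])   // sequential: cache-friendly
    {
        long long s = 0;
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                s += a[i][j];
        return s;
    }

    long long sum_cols(const int (&a)[N][N])   // strided: cache-hostile
    {
        long long s = 0;
        for (int j = 0; j < N; ++j)
            for (int i = 0; i < N; ++i)
                s += a[i][j];
        return s;
    }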

    Also, trusting the compiler too much to magically transform poorly-thought-out C/C++ code into "theoretically optimal" code is wishful thinking. You have to know the compiler and toolchain you use if you care about "performance" at this low level.

    Compilers in C/C++ are generally not very good at reordering sub-expressions, for starters because functions can have side effects. Functional languages don't suffer from this caveat, but they don't fit the current ecosystem that well. There are compiler options (e.g. gcc's -ffast-math or MSVC's /fp:fast) that allow relaxed precision rules, letting the compiler/linker/code generator change the order of operations.
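
    A small example of that caveat (the flags named in the comment are the common gcc/clang and MSVC ones; exact behaviour is compiler-specific). Under default FP rules the compiler must keep the serial order below; relaxed-precision flags let it reassociate and vectorize - and would also optimize the Kahan compensation from the earlier answer into nothing:

    // Strict FP: the compiler must evaluate s += a[i] in order, one long
    // dependency chain.  With -ffast-math (gcc/clang) or /fp:fast (MSVC)
    // it may split the sum into several independent partial sums.
    float sum_strict(const float *a, int n)
    {
        float s = 0.0f;
        for (int i = 0; i < n; ++i)
            s += a[i];
        return s;
    }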

    This topic is a bit of a dead end: for most people it's not relevant, and the rest already know what they are doing anyway.

    It all boils down to this: understanding what you are doing is a bit different from knowing what you are doing.
