Why are compilers so stupid?

借酒劲吻你 2020-11-29 18:07

I always wonder why compilers can't figure out simple things that are obvious to the human eye. They do lots of simple optimizations, but never something even a little bit more sophisticated.

29 answers
  • 2020-11-29 18:57

    Oh, I don't know. Sometimes compilers are pretty smart. Consider the following C program:

    #include <stdio.h>  /* printf() */
    
    int factorial(int n) {
       return n == 0 ? 1 : n * factorial(n - 1);
    }
    
    int main() {
       int n = 10;
    
       printf("factorial(%d) = %d\n", n, factorial(n));
    
       return 0;
    }
    

    On my version of GCC (4.3.2 on Debian testing), when compiled with no optimizations, or -O1, it generates code for factorial() like you'd expect, using a recursive call to compute the value. But on -O2, it does something interesting: It compiles down to a tight loop:

       factorial:
       .LFB13:
               testl   %edi, %edi
               movl    $1, %eax
               je  .L3
               .p2align 4,,10
               .p2align 3
       .L4:
               imull   %edi, %eax
               subl    $1, %edi
               jne .L4
       .L3:
               rep
               ret
    

    Pretty impressive. The recursive call (not even tail-recursive) has been completely eliminated, so factorial now uses O(1) stack space instead of O(N). And although I have only very superficial knowledge of x86 assembly (actually AMD64 in this case, but I don't think any of the AMD64 extensions are being used above), I doubt that you could write a better version by hand. But what really blew my mind was the code that it generated on -O3. The implementation of factorial stayed the same. But main() changed:

       main:
       .LFB14:
               subq    $8, %rsp
       .LCFI0:
               movl    $3628800, %edx
               movl    $10, %esi
               movl    $.LC0, %edi
               xorl    %eax, %eax
               call    printf
               xorl    %eax, %eax
               addq    $8, %rsp
               ret
    

    See the movl $3628800, %edx line? gcc is pre-computing factorial(10) at compile-time. It doesn't even call factorial(). Incredible. My hat is off to the GCC development team.

    Of course, all the usual disclaimers apply, this is just a toy example, premature optimization is the root of all evil, etc, etc, but it illustrates that compilers are often smarter than you think. If you think you can do a better job by hand, you're almost certainly wrong.

    (Adapted from a posting on my blog.)

  • 2020-11-29 18:57

    I think you are underestimating how much work it is to make sure that one piece of code doesn't affect another piece of code. With just a small change to your examples, x, i, and s could all point to the same memory. Once one of the variables is a pointer, it is much harder to tell which code might have side effects, depending on what points to what.

    Also, I think people who write compilers would rather spend their time on optimizations that aren't as easy for humans to do by hand.
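
    To make the aliasing problem concrete, here is a minimal C sketch (the function and its names are hypothetical, not from the question): once sum and x are pointers, the compiler cannot collapse the loop into a single multiply unless it can prove the two never alias.

    /* Hypothetical sketch of the aliasing problem. If sum and x never
       alias, this loop is equivalent to *sum += n * *x. But if sum == x,
       the value doubles on every iteration, so the compiler must keep the
       full loop unless it can prove the pointers are distinct (e.g. by
       declaring them 'restrict'). */
    void accumulate(int *sum, int *x, int n) {
        for (int i = 0; i < n; ++i)
            *sum += *x;  /* must reload *x each time: the store to *sum may have changed it */
    }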

  • 2020-11-29 18:57

    In release mode in VS 2010 (C++), this doesn't take any time to run: x starts at 0 and stays 0, and the loop has no other observable effect, so the optimizer deletes it entirely. Debug mode is another story.

    #include <stdio.h>
    int main()
    {
        int x = 0;
        /* Note: the bound must be long long; 100 * 1000 * 1000 * 1000
           overflows a 32-bit int, which is undefined behavior. */
        for (long long i = 0; i < 100LL * 1000 * 1000 * 1000; ++i) {
            x += x + x + x + x + x;  /* x is 0, so this never changes it */
        }
        printf("%d", x);
    }
    
  • 2020-11-29 18:58

    As others have addressed the first part of your question adequately, I'll try to tackle the second part, i.e. "automatically uses StringBuilder instead".

    There are several good reasons for not doing what you're suggesting, but the biggest factor in practice is likely that the optimizer runs long after the actual source code has been digested & forgotten about. Optimizers generally operate either on the generated byte code (or assembly, three address code, machine code, etc.), or on the abstract syntax trees that result from parsing the code. Optimizers generally know nothing of the runtime libraries (or any libraries at all), and instead operate at the instruction level (that is, low level control flow and register allocation).

    Second, as libraries evolve (esp. in Java) much faster than languages, keeping up with them and knowing what deprecates what and what other library component might be better suited to the task would be a herculean task. Also likely an impossible one, as this proposed optimizer would have to precisely understand both your intent and the intent of each available library component, and somehow find a mapping between them.

    Finally, as others have said (I think), the compiler/optimizer writer can reasonably assume that the programmer writing the input code is not brain-dead. It would be a waste of time to devote significant effort to asinine special cases like these when other, more general optimizations abound. Also, as others have also mentioned, seemingly brain-dead code can have an actual purpose (a spin lock, busy wait prior to a system-level yield, etc.), and the compiler has to honor what the programmer asks for (if it's syntactically and semantically valid).
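
    A C analogue of the String-vs-StringBuilder point (a sketch; the function names are hypothetical): no mainstream compiler rewrites the quadratic strcat loop below into the linear version, because doing so would require understanding the intent behind a library call rather than just the instructions it compiles to.

    #include <string.h>

    /* O(n^2): strcat rescans dst from the beginning on every call. */
    void join_slow(char *dst, const char *const *parts, int n) {
        dst[0] = '\0';
        for (int i = 0; i < n; ++i)
            strcat(dst, parts[i]);
    }

    /* O(n): track the end of the string ourselves, the way StringBuilder
       tracks its own length. This is the rewrite a compiler won't do. */
    void join_fast(char *dst, const char *const *parts, int n) {
        char *end = dst;
        for (int i = 0; i < n; ++i) {
            size_t len = strlen(parts[i]);
            memcpy(end, parts[i], len);
            end += len;
        }
        *end = '\0';
    }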

  • 2020-11-29 18:58

    Because compiler writers try to add optimizations for things that matter (I hope) and that are measured in *Stone benchmarks (I fear).

    There are zillions of other possible code fragments like yours, which do nothing and could be optimized away with ever-increasing effort from the compiler writer, but which are hardly ever encountered in practice.

    What I find embarrassing is that even today most compilers generate code to check whether switchValue is greater than 255 for a dense or almost-full switch on an unsigned character. That adds two instructions to most bytecode interpreters' inner loops.
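
    A minimal sketch of what that means (the opcodes here are made up): *pc is a uint8_t, so it can never exceed 255, yet many compilers still guard the switch's jump table with a range check, costing two instructions per dispatch.

    #include <stdint.h>

    /* Toy bytecode interpreter inner loop. The switch operand is a
       uint8_t and cannot exceed 255, but compilers often still emit
       something like 'cmpl $255, %eax; ja .default' before indexing
       the jump table. */
    int run(const uint8_t *pc) {
        int acc = 0;
        for (;;) {
            switch (*pc++) {
            case 0:  return acc;       /* HALT */
            case 1:  acc += 1; break;  /* INC  */
            case 2:  acc -= 1; break;  /* DEC  */
            default: break;            /* remaining opcodes elided */
            }
        }
    }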
