Why are compilers so stupid?

Asked by 借酒劲吻你 on 2020-11-29 18:07

I always wonder why compilers can't figure out simple things that are obvious to the human eye. They do lots of simple optimizations, but never something even a little bit more complicated.

29 answers
  • 2020-11-29 18:43

    Premise: I studied compilers at university.

    The javac compiler is extremely stupid and performs absolutely no optimization, because it relies on the Java runtime to do that work. The runtime will catch that kind of thing and optimize it, but only after the function has been executed a few thousand times.

    If you use a better compiler (like gcc) with optimizations enabled, it will optimize your code, because it's quite an obvious optimization to perform.
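
    A minimal way to check this yourself (a hedged sketch; the class and method names are just placeholders): compile the class below with javac and disassemble it with javap -c. The loop's compare-and-branch bytecode is still there, untouched; only the JIT will deal with it, and only once the method has become hot.

    class DeadLoop {
        // x is updated on every iteration but its value is never used afterwards,
        // so the loop has no observable effect.
        static int spin() {
            int x = 0;
            for (int i = 0; i < 1000 * 1000; ++i) {
                x += x + x;
            }
            return 0; // x is deliberately ignored
        }

        public static void main(String[] args) {
            // Call it many times so the JIT eventually considers it hot.
            for (int i = 0; i < 20000; ++i) {
                spin();
            }
        }
    }

    Running it with java -XX:+PrintCompilation DeadLoop shows the method being JIT-compiled only after many invocations, which is the "few thousand times" mentioned above.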

  • 2020-11-29 18:45

    Compilers are designed to be predictable. This may make them look stupid from time to time, but that's OK. The compiler writer's goals are

    • You should be able to look at your code and make reasonable predictions about its performance.

    • Small changes in the code should not result in dramatic differences in performance.

    • If a small change looks to the programmer like it should improve performance, it should at least not degrade performance (unless surprising things are happening in the hardware).

    All these criteria militate against "magic" optimizations that apply only to corner cases.


    Both of your examples have a variable updated in a loop but not used elsewhere. This case is actually quite difficult to pick up unless you are using some sort of framework that can combine dead-code elimination with other optimizations like copy propagation or constant propagation. To a simple dataflow optimizer the variable doesn't look dead. To understand why this problem is hard, see the paper by Lerner, Grove, and Chambers in POPL 2002, which uses this very example and explains why it is hard.
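
    To make that concrete, here is a hedged sketch (plain Java, names invented for illustration) of the shape both examples share:

    static void example() {
        int x = 0;
        for (int i = 0; i < 1000000; ++i) {
            x += 3;   // x is read here (by its own update)...
        }
        // ...but the final value of x never reaches any observable use.
        System.out.println("done");
    }

    A straightforward liveness analysis sees the read of x inside the loop and therefore keeps the assignment; only by combining dead-code elimination with the other analyses (or by running them together, as in the framework described in that paper) can the compiler conclude that none of those values ever matters and delete the update.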

  • 2020-11-29 18:45

    Well, I can only speak for C++, because I'm a total beginner at Java. In C++, compilers are free to disregard any requirements placed by the Standard, as long as the observable behavior is as if the compiler had actually followed all of the Standard's rules. Observable behavior is defined as reads and writes to volatile data and calls to library functions. Consider this:

    extern int x; // defined elsewhere
    // note: 100 * 1000 * 1000 * 1000 does not fit in an int, so use a long long bound
    for (long long i = 0; i < 100LL * 1000 * 1000 * 1000; ++i) {
        x += x + x + x + x + x;
    }
    return x;
    

    The C++ compiler is allowed to optimize that loop away and simply store into x, in one step, the value the loop would have produced, because the program behaves as if the loop had run: no volatile data is touched and no library functions are called, so there are no side effects that have to be preserved. Now consider volatile variables:

    extern volatile int x; // defined elsewhere
    for (long long i = 0; i < 100LL * 1000 * 1000 * 1000; ++i) {
        x += x + x + x + x + x;
    }
    return x;
    

    The compiler is no longer allowed to do that optimization, because accesses to volatile objects are themselves part of the observable behavior, so every single write to x in the loop has to actually happen. After all, x could be mapped to a memory cell watched by some hardware device that reacts to every write.


    Speaking of Java, I tested your loop, and it turns out that the GNU Java Compiler (gcj) takes an inordinate amount of time to finish it (it simply didn't finish, and I killed it). With optimization flags enabled (-O2), it printed 0 immediately:

    [js@HOST2 java]$ gcj --main=Optimize -O2 Optimize.java
    [js@HOST2 java]$ ./a.out
    0
    [js@HOST2 java]$
    

    Maybe that observation is helpful for this thread. Why is gcj so fast here? Well, one reason surely is that gcj compiles to machine code ahead of time, so it has no way to optimize the code based on its runtime behavior; it puts all its strength into optimizing as much as it can at compile time. A virtual machine, however, can compile code just in time, as this output of java shows for the following code:

    class Optimize {
        private static int doIt() {
            int x = 0;
            // note: 100 * 1000 * 1000 * 1000 overflows int and wraps to 1215752192
            for (int i = 0; i < 100 * 1000 * 1000 * 1000; ++i) {
                x += x + x + x + x + x;
            }
            return x;
        }
        public static void main(String[] args) {
            for(int i=0;i<5;i++) {
                doIt();
            }
        }
    }
    

    Output for java -XX:+PrintCompilation Optimize:

    1       java.lang.String::hashCode (60 bytes)
    1%      Optimize::doIt @ 4 (30 bytes)
    2       Optimize::doIt (30 bytes)
    

    As we see, it JIT-compiles the doIt method twice: based on observations from the first run, it compiles it a second time. But both entries report the same size (the method's 30 bytes of bytecode), suggesting the loop is still in place.

    As another poster has shown, the execution time of certain dead loops can even increase in some cases for subsequently compiled code. He reported a bug, which can be read here, dated 24 October 2008.

  • 2020-11-29 18:46

    Seriously? Why would anyone ever write real-world code like that? IMHO, the code, not the compiler, is the "stupid" entity here. I for one am perfectly happy that compiler writers don't waste their time trying to optimize something like that.

    Edit/Clarification: I know the code in the question is meant as an example, but that just proves my point: you either have to be trying, or be fairly clueless to write supremely inefficient code like that. It's not the compiler's job to hold our hand so we don't write horrible code. It is our responsibility as the people that write the code to know enough about our tools to write efficiently and clearly.

  • 2020-11-29 18:46

    Compilers in general are very smart.

    What you must consider is that they have to account for every possible exception or situation where optimizing or refactoring code could cause unwanted side effects.

    Things like threaded programs, pointer aliasing, dynamically linked code, and side effects (system calls/memory allocation) make formally proving that a refactoring is safe very difficult.

    Even though your example is simple, there may still be difficult situations to consider.
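
    For example, here is a hedged sketch (names invented) of how aliasing alone can block an "obvious" optimization, even in Java, where array references can alias just like pointers:

    // If dst and src happen to refer to the SAME array, writing dst[0]
    // changes src[0], so the compiler cannot hoist the load of src[0]
    // out of the loop unless it can prove the two references never alias.
    static void fill(int[] dst, int[] src) {
        for (int i = 0; i < dst.length; ++i) {
            dst[i] = src[0] * 2;
        }
    }

    An optimization that looks obvious for the common call fill(a, b) must still be correct for fill(a, a).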

    As for your StringBuilder argument, it is NOT a compiler's job to choose which data structures to use for you.
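
    For reference, the usual String-versus-StringBuilder contrast looks roughly like this (a sketch only; the exact code from the question isn't quoted above):

    int n = 10000;

    // Concatenating in a loop builds and copies a new String every iteration: O(n^2) work.
    String s = "";
    for (int i = 0; i < n; ++i) {
        s += i;
    }

    // Choosing the data structure yourself keeps it linear.
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < n; ++i) {
        sb.append(i);
    }
    String t = sb.toString();

    javac does rewrite a single concatenation using StringBuilder (or invokedynamic on newer JDKs), but it will not restructure your loop to reuse one builder across iterations; that choice is left to the programmer.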

    If you want more powerful optimisations, move to a more strongly typed language like Fortran or Haskell, where compilers are given much more information to work with.

    Most courses teaching compilers/optimisation (even academic ones) give a sense of appreciation for why making general, formally proven optimisations, rather than hacking specific cases, is a very difficult problem.

  • 2020-11-29 18:46

    It forces you (the programmer) to think about what you're writing. Forcing compilers to do your work for you doesn't help anyone: it makes the compilers much more complex (and slower!), and it makes you stupider and less attentive to your code.
