Coding Practices which enable the compiler/optimizer to make a faster program

一个人的身影 2020-12-02 03:24

Many years ago, C compilers were not particularly smart. As a workaround, K&R invented the register keyword to hint to the compiler that maybe it would be a good idea to keep this variable in an internal register.

30 answers
  • 2020-12-02 04:21

    Attempt to program using static single assignment (SSA) as much as possible. SSA is essentially what you end up with in most functional programming languages, and it is what most compilers convert your code to in order to do their optimizations, because it's easier to work with. Writing this way brings to light the places where the compiler might get confused. It also makes all but the worst register allocators work as well as the best register allocators, and it lets you debug more easily because you almost never have to wonder where a variable got its value from: there was only one place it was assigned (see the sketch below).
    Avoid global variables.
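
    To make the single-assignment tip concrete, here is a minimal sketch (the function and names are invented for illustration); each local gets exactly one definition:

    #include <string.h>

    /* Sketch: every local is assigned exactly once, mirroring the SSA
       form the compiler will build internally anyway. */
    static size_t buffer_length(const char *s) {
        const size_t raw_len = strlen(s);        /* one definition */
        const size_t buf_len = raw_len * 2 + 1;  /* one definition */
        return buf_len;
    }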

    When working with data by reference or pointer, pull it into local variables, do your work, and then copy it back (unless you have a good reason not to).
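
    For instance (a hypothetical struct and function, purely for illustration), working in a local and storing once means the compiler doesn't have to assume the pointed-to value changes on every iteration:

    typedef struct { int total; } counter_t;

    /* Sketch: pull the field into a local, do the work there,
       then copy it back through the pointer once. */
    void add_all(counter_t *c, const int *v, int n) {
        int total = c->total;       /* pull into a local */
        for (int i = 0; i < n; i++)
            total += v[i];          /* work on the local only */
        c->total = total;           /* copy it back */
    }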

    Make use of the almost-free comparison against 0 that most processors give you when doing math or logic operations. You almost always get flags for ==0 and <0, from which you can easily get three conditions:

    x = f();
    if (!x) {
        a();
    } else if (x < 0) {
        b();
    } else {
        c();
    }
    

    is almost always cheaper than testing for other constants.

    Another trick is to use subtraction to eliminate one compare in range testing.

    #define FOO_MIN 8
    #define FOO_MAX 199
    int good_foo(int foo) {
        /* If foo < FOO_MIN, the unsigned subtraction wraps around to a
           huge value, so one compare covers both ends of the range. */
        unsigned int bar = foo - FOO_MIN;
        int rc = ((FOO_MAX - FOO_MIN) < bar) ? 1 : 0;  /* 1 when foo is out of range */
        return rc;
    }
    

    This can very often avoid a jump in languages that short-circuit boolean expressions, and it saves the compiler from having to figure out how to keep the result of the first comparison around while doing the second and then combining them. It may look like it has the potential to use up an extra register, but it almost never does. Often you don't need foo anymore anyway, and if you do, rc isn't used yet, so it can go there.

    When using the string functions in C (strcpy, memcpy, ...) remember what they return -- the destination! You can often get better code by 'forgetting' your copy of the pointer to the destination and just grabbing it back from the return value of these functions.
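
    For instance (invented names, no bounds checking; just a sketch), chaining the return values means the destination pointer never needs a second home:

    #include <string.h>

    static char buf[64];

    /* Sketch: strcpy and strcat both return their destination, so the
       compiler can keep reusing the register that already holds buf. */
    char *make_greeting(const char *name) {
        return strcat(strcpy(buf, "hello, "), name);
    }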

    Never overlook the opportunity to return exactly the same thing the last function you called returned. Compilers are not so great at picking up that:

    foo_t * make_foo(int a, int b, int c) {
        foo_t * x = malloc(sizeof(*x));  /* was sizeof(foo), which names no object; use the target's size */
        if (!x) {
            // return NULL;
            return x;  // x is NULL, already in the register used for returns, so duh
        }
        x->a = a;
        x->b = b;
        x->c = c;
        return x;
    }
    

    Of course, you could reverse the logic on that if and only have one return point.

    (tricks I recalled later)

    Declaring functions as static when you can is always a good idea. If the compiler can prove to itself that it has accounted for every caller of a particular function, then it can break the calling conventions for that function in the name of optimization. Compilers can often avoid moving parameters into the registers or stack positions that called functions usually expect their parameters to be in (to do this it has to deviate both in the called function and at every call site). The compiler can also often take advantage of knowing what memory and registers the called function will need, and avoid generating code to preserve variable values that are in registers or memory locations the called function doesn't disturb. This works particularly well when there are few calls to a function. This gets much of the benefit of inlining, but without actually inlining.
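
    A minimal sketch of the idea (invented names): because the helper below is static and every caller is visible in this file, the compiler is free to pass the argument however it likes, or to inline the call entirely:

    /* Sketch: file-local helper; the compiler need not follow the
       platform calling convention when calling it. */
    static int scale(int v) {
        return v * 3;
    }

    int triple_sum(int a, int b) {
        return scale(a) + scale(b);
    }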

  • 2020-12-02 04:22

    I have long suspected, but never proved, that declaring arrays with a power of 2 as the number of elements enables the optimizer to do a strength reduction, replacing the multiply in the index calculation with a shift by a number of bits when looking up individual elements.
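
    A sketch of the suspicion (invented names): with a power-of-2 row length, the row multiply in the index calculation can be strength-reduced to a shift:

    /* Sketch: grid[y][x] is at base + (y*64 + x)*sizeof(int), and
       y*64 can become y << 6 because 64 is a power of 2. */
    static int grid[100][64];

    int cell(int y, int x) {
        return grid[y][x];
    }

    (Whether this actually wins on a given compiler is, as said, unproved; most compilers strength-reduce constant multiplies regardless, so profile before relying on it.)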

  • 2020-12-02 04:23

    Here's my second piece of optimisation advice. As with my first piece of advice, this is general purpose, not language- or processor-specific.

    Read the compiler manual thoroughly and understand what it is telling you. Use the compiler to its utmost.

    I agree with one or two of the other respondents who have identified selecting the right algorithm as critical to squeezing performance out of a program. Beyond that, the rate of return (measured in code execution improvement) on the time you invest in using the compiler well is far higher than the rate of return on tweaking the code.

    Yes, compiler writers are not from a race of coding giants; compilers contain mistakes, and what should, according to the manual and according to compiler theory, make things faster sometimes makes things slower. That's why you have to take one step at a time and measure before- and after-tweak performance.

    And yes, ultimately, you might be faced with a combinatorial explosion of compiler flags, so you need to have a script or two to run make with various compiler flags, queue the jobs on the large cluster and gather the run-time statistics. If it's just you and Visual Studio on a PC, you will run out of interest long before you have tried enough combinations of enough compiler flags.

    Regards

    Mark

    When I first pick up a piece of code I can usually get a factor of 1.4 -- 2.0 times more performance (i.e. the new version of the code runs in 1/1.4 or 1/2 of the time of the old version) within a day or two by fiddling with compiler flags. Granted, that may be a comment on the lack of compiler savvy among the scientists who originate much of the code I work on, rather than a symptom of my excellence. Having set the compiler flags to max (and it's rarely just -O3), it can take months of hard work to get another factor of 1.05 or 1.1.

  • 2020-12-02 04:24

    Two coding techniques I didn't see in the list above:

    Bypass the linker by writing your code as a single source

    While separate compilation is really nice for compile times, it is very bad when you speak of optimization. Basically, the compiler can't optimize beyond the compilation unit; that is the linker's reserved domain.

    But if you design your program well, you can also compile it through a single common source. That is, instead of compiling unit1.c and unit2.c and then linking both objects, compile an all.c that merely #includes unit1.c and unit2.c. Thus you will benefit from all the compiler optimizations.
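
    A minimal sketch of such a single-source build, using the file names from above:

    /* all.c -- the only file handed to the compiler. Including the
       .c files makes the whole program one compilation unit, so the
       optimizer can see and inline across the old unit boundaries. */
    #include "unit1.c"
    #include "unit2.c"

    It is then built with a single command along the lines of cc -O2 all.c, with no separate compile-and-link step per unit.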

    It's very much like writing header-only programs in C++ (and even easier to do in C).

    This technique is easy enough if you write your program to enable it from the beginning, but you must also be aware that it changes part of the C semantics, and you can meet some problems like static-variable or macro collisions. For most programs it's easy enough to overcome the small problems that occur. Also be aware that compiling as a single source is way slower and may take a huge amount of memory (usually not a problem with modern systems).

    Using this simple technique, I have made some programs I wrote ten times faster!

    Like the register keyword, this trick could also become obsolete soon: optimizing through the linker is beginning to be supported by compilers (see GCC's Link Time Optimization).

    Separate atomic tasks in loops

    This one is more tricky. It's about the interaction between algorithm design and the way the optimizer manages the cache and register allocation. Quite often programs have to loop over some data structure and perform some actions for each item. Quite often the actions performed can be split into two logically independent tasks. If that is the case, you can write exactly the same program with two loops over the same boundary, each performing exactly one task. In some cases writing it this way can be faster than the unique loop (the details are more complex, but an explanation can be that in the simple-task case all variables can be kept in processor registers, while in the more complex one that isn't possible: some registers must be written to memory and read back later, and the cost is higher than the additional flow control). A sketch follows.
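
    A sketch of the split (the two tasks here are invented placeholders):

    /* Sketch: one loop doing two independent tasks per item... */
    void fused(int *a, int *b, const int *src, int n) {
        for (int i = 0; i < n; i++) {
            a[i] = src[i] * 2;    /* task one */
            b[i] = src[i] + 7;    /* task two */
        }
    }

    /* ...versus two loops over the same range, one task each. Each
       loop's working set is smaller and may fit entirely in registers. */
    void split(int *a, int *b, const int *src, int n) {
        for (int i = 0; i < n; i++)
            a[i] = src[i] * 2;
        for (int i = 0; i < n; i++)
            b[i] = src[i] + 7;
    }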

    Be careful with this one (profile performance with and without the trick): as with using register, it may just as well make performance worse as better.

  • 2020-12-02 04:24

    Most modern compilers should do a good job speeding up tail recursion, because the function calls can be optimized out.

    Example:

    /* Tail-recursive helper: the recursive call is the last thing the
       function does, so the compiler can replace it with a loop. */
    int fac2(int x, int cur) {
      if (x == 1) return cur;
      return fac2(x - 1, cur * x);
    }
    int fac(int x) {
      return fac2(x, 1);
    }
    

    Of course this example doesn't have any bounds checking.

    Late Edit

    While I have no direct knowledge of the code, it seems clear that the requirements for using CTEs on SQL Server were specifically designed so that they can be optimized via tail-end recursion.

  • 2020-12-02 04:24
    1. Use the most local scope possible for all variable declarations.

    2. Use const whenever possible.

    3. Don't use register unless you plan to profile both with and without it.

    The first two of these, especially #1, help the optimizer analyze the code. In particular, they help it make good choices about which variables to keep in registers, as in the sketch below.
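
    A minimal sketch of #1 and #2 together (invented example): the loop index lives only inside the loop and the bound is const, so the optimizer can see that neither escapes or changes:

    /* Sketch: 'n' is const, 'i' is scoped to the loop; both are easy
       candidates to keep in registers. */
    int sum(const int *v, const int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += v[i];
        return total;
    }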

    Blindly using the register keyword is as likely to hurt your optimization as to help it. It's just too hard to know what will matter until you look at the assembly output or profile.

    There are other things that matter for getting good performance out of code; designing your data structures to maximize cache locality, for instance. But the question was about the optimizer.
