I was changing my for loop to increment using ++i instead of i++ and got to thinking: is this really necessary anymore? Surely today's compilers do this sort of optimization on their own.
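To be concrete, the change I'm talking about is just the increment clause of an ordinary counting loop, something like:

    #include <cstdio>

    int main() {
        // The change in question: ++i instead of i++ in the third clause.
        // For a built-in int the two are interchangeable here, because the
        // value of the expression is discarded either way.
        for (int i = 0; i < 10; ++i) {  // previously: i++
            std::printf("%d\n", i);
        }
        return 0;
    }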
In general, no. Compilers are much better at doing small, straightforward micro-optimizations like this consistently across your entire code base. Make sure you are actually enabling the optimizer, by compiling your release version with the right optimization flags (for example, /O2 with Visual C++ or -O2 with GCC). If you use Visual Studio, you might want to experiment with favoring size over speed (/O1; there are a lot of cases where small code is faster), link-time code generation (LTCG, which enables the compiler to do cross-compiland optimizations), and maybe even profile-guided optimization.
You also need to remember that the vast bulk of your code won't matter from a performance perspective; optimizing this code will have no user-visible effect.
You need to define your performance goals early on and measure frequently to make sure you're meeting them. When you fall short of those goals, use tools such as profilers to determine where the hot spots in your code are, and optimize those.
As another poster here mentioned, "optimization without measuring and understanding isn't optimization at all - it's just random change."
If you have measured and determined that a particular function or loop is a hotspot, there are two broad approaches to optimizing it: improve the algorithm it uses, or tune the implementation of the algorithm you already have. Either way, measure again afterwards.
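Here is a minimal sketch of that "measure first" workflow, using std::chrono for a crude before/after timing; hypotheticalHotspot is just a stand-in for whatever your profiler actually flags:

    #include <chrono>
    #include <cstdio>

    // Stand-in for a function your profiler has flagged; purely illustrative.
    long long hypotheticalHotspot(int n) {
        long long sum = 0;
        for (int i = 0; i < n; ++i) sum += i;
        return sum;
    }

    int main() {
        using clock = std::chrono::steady_clock;

        auto start = clock::now();
        long long result = hypotheticalHotspot(10000000);
        auto stop = clock::now();

        auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
        std::printf("result=%lld, took %lld us\n", result,
                    static_cast<long long>(us.count()));
        return 0;
    }

A profiler tells you where to look; a harness like this tells you whether a change actually helped, on your compiler and your data.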
Certainly yes, because the compiler needs more resources to optimize unoptimized code than to optimize something that is already optimized. In particular, it causes the computer to consume a little more energy, which, small as it is, still has a bad impact on our already suffering environment. This is especially important for open-source code, which is compiled more often than closed-source code.
Go green, save the planet, optimize yourself
I wanted to add a little something. This "premature optimization is bad" mantra is kind of rubbish. What do you do when you select an algorithm? You probably take the one with the best time complexity: OMG, premature optimization! Yet everyone seems fine with this. So it seems like the real attitude is "premature optimization is bad, unless you do it my way". At the end of the day, do whatever you need to do to make the app you need to make.
"The programmer shall left shift by one instead of multiplying by 2". hope you dont want to multiply floats or negative numbers then ;)
I don't generally optimize lower than the O(f(n)) complexity unless I'm writing on an embedded device.
For typical g++/Visual Studio work, I presume that the basic optimizations will be made reliably (at least when optimization is requested). For less mature compilers, that assumption is presumably not valid.
If I were doing heavy maths work on streams of data, I'd check the compiler's ability to emit SIMD instructions.
I'd rather tune my code around different algorithms than around a specific version of a specific compiler. Algorithms will stand the test of multiple processors and compilers, whereas if you tune for the 2008 Visual C++ (first release) version, your optimizations may not even work next year.
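As a hypothetical illustration of why algorithmic tuning is the durable kind, compare a micro-tunable O(n^2) duplicate check with a version that simply switches data structure; the second wins on every compiler and every processor once n grows:

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // O(n^2): no amount of compiler- or instruction-level tuning will
    // rescue the nested loop once n gets large.
    bool hasDuplicateQuadratic(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // O(n) expected: a better algorithm outlives any particular compiler.
    bool hasDuplicateLinear(const std::vector<int>& v) {
        std::unordered_set<int> seen;
        for (int x : v)
            if (!seen.insert(x).second) return true;
        return false;
    }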
Certain optimization tricks that were very reasonable on older computers prove to have issues today. E.g., the ++/-- operators were designed around an older architecture that had a very fast increment instruction. Today, if you write something like

for (int i = 0; i < top; i += 1)

I would presume that the compiler would optimize i += 1 into an inc instruction (if the CPU has one).
The classic advice is to optimize top-down: start with the design and the algorithms, and leave instruction-level tweaks for last.
I still do things like ra<<=1; instead of ra*=2;, and will continue to. But the compilers (as bad as they are) and, more importantly, the speed of today's computers mean that these optimizations are often lost in the noise. As a general question, no, it is not worth it. If you are on a resource-limited platform (say, a microcontroller) where every extra clock really counts, then you probably already do this, and you probably do a fair amount of assembler tuning as well. As a habit I try not to give the compiler too much extra work, but for code readability and reliability I don't go out of my way.
The bottom line for performance, though, has never changed: find some way to time your code, measure to find the low-hanging fruit, and fix that. Mike D. hit the nail on the head in his response. I have too often seen people worry about specific lines of code without realizing that they are either using a poor compiler, or that by changing one compiler option they could see a several-fold increase in execution performance.
Bad example – the decision whether to use ++i or i++ doesn't involve any kind of trade-off! ++i has (may have) a net benefit without any downsides. There are many similar scenarios, and any discussion in these realms is a waste of time.
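To see why ++i can only help, remember that for class types such as iterators, i++ has to return a copy of the previous value, so a non-trivial iterator pays for a temporary that ++i never creates. A toy sketch (C++17 for the inline static counter; this is not a real STL iterator, just an instrumented stand-in):

    #include <cstdio>

    // Iterator-like type that counts how often it is copied, purely to
    // make the pre/post-increment difference visible.
    struct CountingIter {
        int pos = 0;
        static inline int copies = 0;

        CountingIter() = default;
        CountingIter(const CountingIter& other) : pos(other.pos) { ++copies; }

        CountingIter& operator++() { ++pos; return *this; }  // pre-increment
        CountingIter operator++(int) {                       // post-increment
            CountingIter old(*this);  // the copy that ++i never makes
            ++pos;
            return old;
        }
    };

    int main() {
        CountingIter it;
        for (int n = 0; n < 1000; ++n) ++it;  // pre: no copies at all
        std::printf("pre:  %d copies\n", CountingIter::copies);

        CountingIter::copies = 0;
        CountingIter jt;
        for (int n = 0; n < 1000; ++n) jt++;  // post: one copy per step
        std::printf("post: %d copies\n", CountingIter::copies);
        return 0;
    }

For a plain int the optimizer erases the difference entirely, which is exactly why the habit costs nothing and occasionally pays.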
That said, I believe it's very important to know to what extent the target compiler is capable of optimizing small code fragments. The truth is: modern compilers are (sometimes surprisingly!) good at it. Jason has an incredible story concerning an optimized (non-tail-recursive) factorial function.
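For reference (and not necessarily the exact code from that story), a non-tail-recursive factorial looks like the sketch below; the multiplication happens after the recursive call returns, so there is no tail call to eliminate naively, yet modern optimizers can often turn it into a loop anyway:

    #include <cstdio>

    // Non-tail-recursive: the multiply by n happens *after* the recursive
    // call returns, yet good optimizers can still transform this into a loop.
    unsigned long long factorial(unsigned n) {
        return (n <= 1) ? 1ULL : n * factorial(n - 1);
    }

    int main() {
        std::printf("%llu\n", factorial(20));  // 20! still fits in 64 bits
        return 0;
    }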
On the other hand, compilers can be surprisingly stupid as well. The key is that many optimizations require control-flow analysis, which becomes NP-complete. Every optimization thus becomes a trade-off between compilation time and usefulness. Often, the locality of an optimization plays a crucial role, because the computation time required to perform the optimization blows up when the amount of code the compiler has to consider grows by just a few statements.
And as others have said, these minute details are still relevant and always will be (for the foreseeable future). Although compilers get smarter all the time and machines get faster, the size of our data grows too. In fact, we're losing this particular battle: in many fields, the amount of data grows much faster than computers get better.