Performance anti-patterns


Question


I am currently working for a client who is petrified of changing lousy, un-testable, and un-maintainable code because of "performance reasons". It is clear that many misconceptions are running rife, and that the reasons behind them are not understood but merely followed with blind faith.

One such anti-pattern I have come across is the need to mark as many classes as possible as sealed internal...

*RE-Edit: I see marking everything as sealed internal (in C#) as a premature optimisation.*

I am wondering: what are some other performance anti-patterns people are aware of or have come across?


Answer 1:


The biggest performance anti-pattern I have come across is:

  • Not measuring performance before and after the changes.

Collecting performance data will show whether a given technique was successful. Skipping this step produces largely useless activity, because someone has a "feeling" of increased performance when nothing at all has changed.
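
A minimal sketch of such a before-and-after measurement, assuming a hypothetical hot path called SumOfSquares and an arbitrary iteration count, using .NET's System.Diagnostics.Stopwatch:

```csharp
using System;
using System.Diagnostics;

class MeasureFirst
{
    // Hypothetical code path under suspicion; substitute the real one.
    static long SumOfSquares(int n)
    {
        long total = 0;
        for (int i = 0; i < n; i++)
            total += (long)i * i;
        return total;
    }

    static void Main()
    {
        const int iterations = 1000;

        // Warm up so JIT compilation does not distort the timing.
        SumOfSquares(100_000);

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            SumOfSquares(100_000);
        sw.Stop();

        Console.WriteLine($"Average: {sw.Elapsed.TotalMilliseconds / iterations:F3} ms");
    }
}
```

Record the number, make the change, and run the exact same harness again; only then does "faster" mean anything.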




Answer 2:


The elephant in the room: Focusing on implementation-level micro-optimization instead of on better algorithms.
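
As a hedged illustration (collection sizes invented) of how wide that gap is: replacing a linear scan inside a loop with a hash-based lookup takes the work from O(n·m) to O(n + m), which no amount of loop-body polishing can match:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class AlgorithmFirst
{
    static void Main()
    {
        List<int> haystack = Enumerable.Range(0, 100_000).ToList();
        List<int> needles = Enumerable.Range(50_000, 1_000).ToList();

        // O(n * m): List.Contains walks the whole list for every needle.
        int slowHits = needles.Count(n => haystack.Contains(n));

        // O(n + m): build a hash set once; each lookup is then O(1) on average.
        var set = new HashSet<int>(haystack);
        int fastHits = needles.Count(n => set.Contains(n));

        Console.WriteLine($"{slowHits} == {fastHits}");
    }
}
```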




Answer 3:


Variable re-use.

I used to do this all the time, figuring I was saving a few cycles on the declaration and lowering the memory footprint. Those savings were of minuscule value compared with how unruly it made the code to debug, especially when I ended up moving a code block around and the assumptions about starting values changed.
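
A short sketch of the failure mode (variable names invented): one temporary re-used across unrelated blocks makes each block silently depend on the previous one's leftover state:

```csharp
using System;
using System.Collections.Generic;

class VariableReuse
{
    static void Main()
    {
        var prices = new List<double> { 9.99, 24.50, 3.25 };

        // Anti-pattern: a single 'total' re-used for two unrelated sums.
        double total = 0;
        foreach (double p in prices)
            total += p;
        Console.WriteLine($"Sum of prices: {total}");

        total = 0; // easy to lose when blocks get moved around
        foreach (double p in prices)
            total += p * 0.08;
        Console.WriteLine($"Sum of taxes: {total}");

        // Safer: each computation gets its own narrowly scoped variable,
        // so a moved block cannot inherit stale starting values.
        double taxTotal = 0;
        foreach (double p in prices)
            taxTotal += p * 0.08;
        Console.WriteLine($"Sum of taxes: {taxTotal}");
    }
}
```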




Answer 4:


Premature performance optimization comes to mind. I tend to avoid performance optimizations at all costs, and when I decide I do need them, I pass the issue around to my colleagues for several rounds, trying to make sure we put the obfu... eh, optimization in the right place.




Answer 5:


One that I've run into was throwing hardware at seriously broken code in an attempt to make it fast enough, sort of the converse of Jeff Atwood's article mentioned in Rulas' comment. I'm not talking about the difference between speeding up a sort that uses a basic, correct algorithm by running it on faster hardware versus using an optimized algorithm. I'm talking about using a not-obviously-correct, home-brewed O(n^3) algorithm when an O(n log n) algorithm is in the standard library. There are also things like hand-coding routines because the programmer doesn't know what's in the standard library. That one's very frustrating.
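
A hedged sketch of that last point (data invented): a hand-rolled O(n^2) selection sort next to Array.Sort, the already-debugged O(n log n) sort that ships with .NET:

```csharp
using System;

class UseTheLibrary
{
    // Home-brewed O(n^2) selection sort: slower, and a bug magnet.
    static void SelectionSort(int[] a)
    {
        for (int i = 0; i < a.Length - 1; i++)
        {
            int min = i;
            for (int j = i + 1; j < a.Length; j++)
                if (a[j] < a[min])
                    min = j;
            (a[i], a[min]) = (a[min], a[i]);
        }
    }

    static void Main()
    {
        int[] homeBrewed = { 5, 3, 8, 1, 9, 2 };
        int[] standard = { 5, 3, 8, 1, 9, 2 };

        SelectionSort(homeBrewed);
        Array.Sort(standard); // O(n log n), heavily tested, one line

        Console.WriteLine(string.Join(",", homeBrewed));
        Console.WriteLine(string.Join(",", standard));
    }
}
```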




Answer 6:


Using design patterns just for the sake of using them.




Answer 7:


  1. Using #defines instead of functions to avoid the penalty of a function call. I've seen code where expansions of defines turned out to generate huge and really slow code. Of course, it was impossible to debug as well. Inline functions are the way to do this, but they should be used with care too.

  2. I've seen code where independent tests have been converted into bits in a word that can be used in a switch statement. A switch can be really fast, but when people turn a series of independent tests into a bitmask and start writing some 256 optimized special cases, they had better have a very good benchmark proving that this gives a performance gain. It's a real pain from a maintenance point of view, and treating the tests independently keeps the code much smaller, which also matters for performance (see the sketch after this list).
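
A hedged C# sketch of the second point (flag names invented): three independent tests packed into a bitmask and dispatched through a switch, next to the plain tests that say the same thing in a fraction of the code:

```csharp
using System;

class BitmaskSwitch
{
    static void Main()
    {
        bool isAdmin = true, isActive = true, isVerified = false;

        // Anti-pattern: pack independent tests into a bitmask, then
        // enumerate the combinations. Eight flags means 256 cases.
        int mask = (isAdmin ? 1 : 0) | (isActive ? 2 : 0) | (isVerified ? 4 : 0);
        switch (mask)
        {
            case 0: Console.WriteLine("nothing"); break;
            case 1: Console.WriteLine("admin"); break;
            case 2: Console.WriteLine("active"); break;
            case 3: Console.WriteLine("admin active"); break;
            // ...four more cases for a mere three flags...
            default: Console.WriteLine("other"); break;
        }

        // Independent tests: smaller code, and each condition can change
        // without rewriting a combinatorial case table.
        if (isAdmin) Console.WriteLine("admin");
        if (isActive) Console.WriteLine("active");
        if (isVerified) Console.WriteLine("verified");
    }
}
```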




Answer 8:


Lack of clear program structure is the biggest coding sin of them all. Convoluted logic that is believed to be fast almost never is.




Answer 9:


Do not refactor or optimize while writing your code. It is extremely important not to try to optimize your code before you finish it.




Answer 10:


Julian Birch once told me:

"Yes but how many years of running the application does it actually take to make up for the time spent by developers doing it?"

He was referring to the cumulative amount of time saved during each transaction by an optimisation that would take a given amount of time to implement.

Wise words from the old sage... I often think of this advice when considering a funky optimisation. You can extend the same notion a little further by considering how much developer time is spent dealing with the code in its present state versus how much time is saved for the users. You could even weight the time by the hourly rate of the developer versus that of the user if you wanted.

Of course, sometimes it's impossible to measure. For example, if an e-commerce application takes 1 second longer to respond, you will lose some small percentage of revenue from users getting bored during that second. To make up that second you need to implement and maintain optimised code. The optimisation impacts gross profit positively and net profit negatively, so it's much harder to balance. You could try - with good stats.
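
A back-of-the-envelope sketch of that trade-off, where every figure is an invented assumption:

```csharp
using System;

class BreakEven
{
    static void Main()
    {
        // All figures are illustrative assumptions, not real data.
        double devHours = 40;              // time to implement the optimisation
        double devRate = 80;               // developer cost per hour
        double savedSecondsPerTxn = 0.05;  // 50 ms saved per transaction
        double userRate = 30;              // value of a user's hour
        double txnsPerDay = 10_000;

        double cost = devHours * devRate;
        double savedPerDay = txnsPerDay * savedSecondsPerTxn / 3600.0 * userRate;

        Console.WriteLine($"Cost: {cost:F0}, value saved per day: {savedPerDay:F2}");
        Console.WriteLine($"Break-even after {cost / savedPerDay:F0} days");
    }
}
```

With these made-up numbers the optimisation takes over two years to pay for itself, which is exactly the kind of result the question above is meant to surface.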




Answer 11:


Exploiting your programming language. Things like using exception handling instead of if/else just because, in PLSnakish 1.4, it's faster. Guess what? Chances are it's not faster at all, and two years from now someone maintaining your code will get really angry with you, because you obfuscated the code and made it run much slower: in PLSnakish 1.8 the language maintainers fixed the problem, and now if/else is 10 times faster than the exception-handling trick. Work with your programming language and framework, not against them!
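
A C# rendering of the same trap (input invented): an exception caught as an ordinary branch, next to the idiomatic TryParse check that works with the framework instead of against it:

```csharp
using System;

class ExceptionsAsControlFlow
{
    static void Main()
    {
        string input = "not a number";

        // Anti-pattern: an exception as a routine branch. It obscures
        // intent, and throwing costs far more than a comparison.
        int value;
        try
        {
            value = int.Parse(input);
        }
        catch (FormatException)
        {
            value = 0;
        }

        // Working with the language: the failure is an expected outcome,
        // so the API provides a non-throwing test for it.
        if (!int.TryParse(input, out int parsed))
            parsed = 0;

        Console.WriteLine($"{value} {parsed}");
    }
}
```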




Answer 12:


Changing more than one variable at a time. This drives me absolutely bonkers! How can you determine the impact of a change on a system when more than one thing's been changed?

Related to this, making changes that are not warranted by observations. Why add faster/more CPUs if the process isn't CPU bound?




Answer 13:


General solutions.

Just because a given pattern/technology performs better in one circumstance does not mean it does in another.

StringBuilder overuse in .NET is a frequent example of this one.
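
A sketch of that example (string counts arbitrary): StringBuilder pays off when appending in a loop, while a handful of known pieces compiles down to a single String.Concat call, so plain concatenation is simpler and no slower:

```csharp
using System;
using System.Text;

class StringBuilderInContext
{
    static void Main()
    {
        // StringBuilder earns its keep here: thousands of '+' operations
        // would otherwise allocate a fresh string on every iteration.
        var sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++)
            sb.Append(i).Append(',');
        string big = sb.ToString();

        // Overkill here: a few known pieces become one String.Concat.
        string first = "Ada", last = "Lovelace";
        string name = first + " " + last;

        Console.WriteLine($"{big.Length} {name}");
    }
}
```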




Answer 14:


Once I had a former client call me asking for any advice I had on speeding up their apps.

He seemed to expect me to say things like "check X, then check Y, then check Z", in other words, to provide expert guesses.

I replied that you have to diagnose the problem. My guesses might be wrong less often than someone else's, but they would still be wrong, and therefore disappointing.

I don't think he understood.




Answer 15:


Some developers believe a fast-but-incorrect solution is sometimes preferable to a slow-but-correct one. So they will ignore various boundary conditions or situations that "will never happen" or "won't matter" in production.

This is never a good idea. Solutions always need to be "correct".

You may need to adjust your definition of "correct" depending upon the situation. What is important is that you know/define exactly what you want the result to be for any condition, and that the code gives those results.




Answer 16:


Michael A Jackson gives two rules for optimizing performance:

  1. Don't do it.
  2. (experts only) Don't do it yet.

If people are worried about performance, tell 'em to make it real - what is good performance and how do you test for it? Then if your code doesn't perform up to their standards, at least it's something the code writer and the application user agree on.

If people are worried about non-performance costs of rewriting ossified code (for example, the time sink) then present your estimates and demonstrate that it can be done in the schedule. Assuming it can.




Answer 17:


I believe it is a common myth that super lean code "close to the metal" is more performant than an elegant domain model.

This was apparently debunked by the creator/lead developer of DirectX, who rewrote the C++ version in C# with massive improvements. [source required]




Answer 18:


Appending to an array one element at a time, using (for example) push_back() in the C++ STL or ~= in D, when you know ahead of time how big the array is supposed to be and could pre-allocate it.
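
The same idea in C# terms (size invented): a List<T> grows by repeated reallocate-and-copy as it doubles its backing array, so passing the known size to the constructor, like reserve() before push_back() in C++, avoids that churn:

```csharp
using System;
using System.Collections.Generic;

class PreAllocate
{
    static void Main()
    {
        const int n = 1_000_000;

        // Growing from the default capacity: the backing array is
        // reallocated and copied roughly log2(n) times as it doubles.
        var growing = new List<int>();
        for (int i = 0; i < n; i++)
            growing.Add(i);

        // Size known up front: one allocation, no copies.
        var preallocated = new List<int>(n);
        for (int i = 0; i < n; i++)
            preallocated.Add(i);

        Console.WriteLine($"{growing.Count} {preallocated.Count}");
    }
}
```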



Source: https://stackoverflow.com/questions/425612/performance-anti-patterns
