What code metric(s) convince you that provided code is “crappy”? [closed]

≯℡__Kan透↙ submitted on 2019-11-28 16:07:05
VonC

No metrics regarding coding-style are part of such a warning.

For me it is about static analysis of the code, which can truly be 'on' all the time:

I would put coverage tests in a second step, as such tests can take time.


Do not forget that "crappy" code is not detected by metrics alone, but by the combination and evolution (as in "trend") of metrics: see the What is the fascination with code metrics? question.

That means you not only have to recommend code metrics to "automatically identify crappy code", but you also have to recommend the right combination and trend analysis to go along with those metrics.


On a side note, I do share your frustration ;), and I do not share tloach's point of view (in the comments of another answer): "Ask a vague question, get a vague answer", he says... your question deserves a specific answer.

Tom Ritter

Not an automated solution, but I find WTFs per minute useful.


(image: the "WTFs/minute" code-review comic; source: osnews.com)

Number of warnings the compiler spits out when I do a build.

Number of commented-out lines per line of production code. Generally it indicates a sloppy programmer who doesn't understand version control.

Developers are always concerned with metrics being used against them, and labeling code "crappy" is not a good start. This matters because, if you are worried about your developers gaming the metrics, you shouldn't use them for anything that works to their advantage or disadvantage.

This works best when you don't let the metric tell you where the code is crappy, but instead use the metric to determine where you need to look. You look by holding a code review, and the decision of how to fix the issue is between the developer and the reviewer. I would also err on the side of the developer against the metric: if the code is still flagged by the metric but the reviewers think it is good, leave it alone.

But it is important to keep in mind this gaming effect when your metrics start to improve. Great, I now have 100% coverage but are the unit tests any good? The metric tells me I am ok, but I still need to check it out and look at what got us there.

Bottom line, the human trumps the machine.

Number of global variables.

  • Non-existent tests (revealed by code coverage). It's not necessarily an indicator that the code is bad, but it's a big warning sign.

  • Profanity in comments.

Metrics alone do not identify crappy code. However, they can identify suspicious code.

There are a lot of metrics for OO software. Some of them can be very useful:

  • Average method size (in LOC/statements or in complexity). Large methods can be a sign of bad design.
  • Number of methods overridden by a subclass. A large number indicates bad class design.
  • Specialization index (number of overridden methods * nesting level / total number of methods). High numbers indicate possible problems in the class diagram.

There are a lot more viable metrics, and they can be calculated using tools. This can be a nice help in identifying crappy code.
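The specialization index above is simple enough to compute directly once you have the three inputs. A minimal sketch of that formula (the function name is my own, not from any tool):

```python
def specialization_index(overridden: int, nesting_level: int, total_methods: int) -> float:
    """Specialization index as defined above:
    (number of overridden methods * nesting level) / total number of methods.
    Higher values suggest a subclass that rewrites much of its parent."""
    if total_methods == 0:
        raise ValueError("class has no methods")
    return overridden * nesting_level / total_methods

# A subclass 3 levels deep that overrides 4 of its 10 methods:
print(specialization_index(4, 3, 10))  # 1.2
```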

  • global variables
  • magic numbers
  • code/comment ratio
  • heavy coupling (for example, in C++ you can measure this by looking at class relations or the number of cpp/header files that cross-include each other)
  • const_cast or other types of casting within the same code-base (not w/ external libs)
  • large portions of code commented-out and left in there
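Several of these smells, such as magic numbers and commented-out code, are mechanically detectable. A rough heuristic scanner for C/C++-style source; the regexes, names, and heuristics here are my own, not taken from any existing tool:

```python
import re

# Heuristic: a number other than 0 or 1, not part of an identifier, is "magic".
MAGIC_NUMBER = re.compile(r'(?<![\w.])(?!0\b|1\b)\d+(?![\w.])')
# Heuristic: a // comment ending in ; { or } is probably commented-out code.
COMMENTED_CODE = re.compile(r'^\s*//.*[;{}]\s*$')

def scan(lines):
    """Return (line number, smell) findings for the given source lines."""
    findings = []
    for n, line in enumerate(lines, 1):
        if COMMENTED_CODE.match(line):
            findings.append((n, 'commented-out code'))
        elif MAGIC_NUMBER.search(line):
            findings.append((n, 'magic number'))
    return findings

source = [
    'int timeout = 4500;',    # magic number
    '// legacy_call(x, y);',  # commented-out code
    'count += 1;',            # 0 and 1 are usually fine
]
print(scan(source))  # [(1, 'magic number'), (2, 'commented-out code')]
```

Real linters do this with proper parsing rather than regexes, but the idea is the same: these smells need no human judgment to flag, only to evaluate.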

My personal favourite warning flag: comment-free code. It usually means the coder hasn't stopped to think about it; plus it automatically makes the code hard to understand, so it ups the crappy ratio.

At first sight: cargo cult application of code idioms.

As soon as I have a closer look: obvious bugs and misconceptions by the programmer.

Dandikas

My bet: a combination of cyclomatic complexity (CC) and code coverage from automated tests (TC).

CC | TC
---+-----
 2 |  0%  - good anyway, cyclomatic complexity too small
10 | 70%  - good
10 | 50%  - could be better
10 | 20%  - bad
20 | 85%  - good
20 | 70%  - could be better
20 | 50%  - bad
...

crap4j is a possible tool (for Java) with an explanation of the concept... still in search of a C#-friendly tool :(
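crap4j combines exactly these two numbers into a single C.R.A.P. score: comp² × (1 − coverage)³ + comp, where comp is cyclomatic complexity and coverage is a fraction in [0, 1]. A minimal sketch of that published formula, applied to the table above (the 30 cut-off is crap4j's commonly cited default threshold):

```python
def crap_score(complexity: int, coverage: float) -> float:
    """C.R.A.P. score as published by crap4j:
    comp^2 * (1 - coverage)^3 + comp, with coverage in [0, 1]."""
    return complexity ** 2 * (1 - coverage) ** 3 + complexity

for cc, cov in [(2, 0.0), (10, 0.7), (10, 0.2), (20, 0.5)]:
    score = crap_score(cc, cov)
    verdict = 'ok' if score < 30 else 'crappy'  # 30 = crap4j's default threshold
    print(f'CC={cc:2d} coverage={cov:.0%}: {score:6.1f} -> {verdict}')
```

Running this reproduces the table's judgments: CC 2 with no coverage scores a harmless 6, while CC 10 with only 20% coverage scores over 60.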

Ratio of worthless comments to meaningful comments:

    ' Set i to 1
    Dim i As Integer = 1

I don't believe there is any such metric. With the exception of code that actually doesn't do what it's supposed to (which is a whole extra level of crappiness) 'crappy' code means code that is hard to maintain. That usually means it's hard for the maintainer to understand, which is always to some extent a subjective thing, just like bad writing. Of course there are cases where everyone agrees the writing (or the code) is crappy, but it's very hard to write a metric for it.

Plus everything is relative. Code doing a highly complex function, in minimal memory, optimized for every last cycle of speed, will look very bad compared with a simple function under no restrictions. But it's not crappy - it's just doing what it has to.

Unfortunately, there is no single metric that I know of. Something to keep in mind: no matter what you choose, programmers will game the system to make their code look good. I have seen that happen everywhere any kind of "automatic" metric is put into place.

A lot of conversions to and from strings. Generally it's a sign that the developer isn't clear about what's going on and is merely trying random things until something works. For example, I've often seen code like this:

    object num = GetABoxedInt();
    // long myLong = (long)num;  // throws InvalidCastException: a boxed int can't be unboxed as long
    long myLong = Int64.Parse(num.ToString());

when what they really wanted was:

   long myLong = (long)(int)num;

I am surprised no one has mentioned crap4j.

  • Watch out for the ratio of Pattern classes to standard classes. A high ratio would indicate Patternitis.
  • Check for magic numbers not defined as constants.
  • Use a pattern-matching utility to detect potentially duplicated code.
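The duplicated-code check can be approximated by hashing fixed-size windows of whitespace-normalized lines, which is a toy version of what clone-detection tools do. The function name and window size below are my own choices:

```python
from collections import defaultdict

def find_duplicate_blocks(lines, window=4):
    """Report every `window`-line run of normalized code that appears
    more than once, with the 1-based line numbers of each occurrence."""
    normalized = [' '.join(l.split()) for l in lines]
    seen = defaultdict(list)
    for i in range(len(normalized) - window + 1):
        chunk = tuple(normalized[i:i + window])
        if any(chunk):                 # skip all-blank windows
            seen[chunk].append(i + 1)  # 1-based line numbers
    return {chunk: locs for chunk, locs in seen.items() if len(locs) > 1}

src = [
    'a = load()',
    'b = a + 1',
    'c = b * 2',
    'save(c)',
    "print('x')",
    'a = load()',
    'b = a + 1',
    'c = b * 2',
    'save(c)',
]
print(find_duplicate_blocks(src))  # the 4-line block at lines 1 and 6
```

Real tools such as CPD or Simian normalize identifiers too, so renamed copies are still caught; this sketch only catches verbatim duplicates.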

Sometimes, you just know it when you see it. For example, this morning I saw:

void mdLicense::SetWindows(bool Option) {
  _windows = (Option ? true : false);
}

I just had to ask myself 'why would anyone ever do this?'.

Code coverage has some value, but otherwise I tend to rely more on code profiling to tell if the code is crappy.

Ratio of comments that include profanity to comments that don't.

Higher = better code.

Lines of comments / Lines of code

value > 1 -> bad (too many comments)

value < 0.1 -> bad (not enough comments)

Adjust numeric values according to your own experience ;-)
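A sketch of that ratio check for '#'-commented source; the function names, sample, and exact thresholds are my own rendering of the rule of thumb above:

```python
def comment_ratio(lines):
    """Lines of comments / lines of code, for '#'-commented source."""
    comments = sum(1 for l in lines if l.strip().startswith('#'))
    code = sum(1 for l in lines if l.strip() and not l.strip().startswith('#'))
    return comments / code if code else float('inf')

def verdict(ratio):
    if ratio > 1:
        return 'bad: too many comments'
    if ratio < 0.1:
        return 'bad: not enough comments'
    return 'ok'

sample = ['# read config', 'cfg = load()', 'run(cfg)', '', 'stop()']
r = comment_ratio(sample)  # 1 comment / 3 code lines ≈ 0.33
print(verdict(r))          # ok
```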

I take a multi-tiered approach, with the first tier being reasonable readability, offset only by the complexity of the problem being solved. If it can't pass the readability test, I usually consider the code less than good.

TODO comments in production code. They simply show that the developer does not carry tasks through to completion.

Methods with 30 arguments. On a web service. That is all.

Well, there are various ways to tell whether or not code is good code. Here are some of them:

  1. Cohesion: If a block of code, whether a class or a method, serves multiple responsibilities, it is low in cohesion. Code low in cohesion tends to be less reusable and, in turn, harder to maintain.

  2. Code complexity: One can use McCabe cyclomatic complexity (the number of decision points) to measure how complex code is. Highly complex code is hard to read and understand.

  3. Documentation: Code without enough documentation also makes for lower software quality, from the perspective of whoever has to use and maintain it.
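A rough way to get McCabe's number for a Python function is to count decision-point nodes in its AST. The node list below is a common simplification of the metric, not its full definition:

```python
import ast

# Node types counted as decision points; a simplification: e.g. each
# BoolOp counts once regardless of how many and/or operands it chains.
DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
             ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """1 + number of decision points found in the parsed source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

src = '''
def classify(x):
    if x < 0:
        return 'negative'
    for _ in range(3):
        if x % 2:
            return 'odd'
    return 'even-ish'
'''
print(cyclomatic_complexity(src))  # 4: base 1 + two ifs + one for
```

Tools like radon or flake8's mccabe plugin implement this properly; the sketch just shows where the number comes from.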

Check out the following page to read about a checklist for code review.

This hilarious blog post on The Code C.R.A.P. Metric could be useful.
