I'm working on a quite large project, a few years in the making, at a pretty large company, and I'm taking on the task of driving toward better overall code quality.
I was wondering what kind of metrics you would use to measure quality and complexity in this context. I'm not looking for absolute measures, but a series of items which could be improved over time. Given that this is a bit of a macro-operation across hundreds of projects (I've seen some questions asked about much smaller projects), I'm looking for something more automatable and holistic.
So far, I have a list that looks like this:
- Code coverage percentage during full-functional tests
- Recurrence of BVT failures
- Dependency graph/score, based on some tool like NDepend
- Number of build warnings
- Number of FxCop/StyleCop warnings found/suppressed
- Number of "catch" statements
- Number of manual deployment steps
- Number of projects
- Percentage of code/projects that's "dead", as in, not referenced anywhere
- Number of WTF's during code reviews
- Total lines of code, maybe broken down by tier
You should organize your work around the six major software quality characteristics: functionality, reliability, usability, efficiency, maintainability, and portability. I've put a diagram online that describes these characteristics. Then, for each characteristic, decide the most important metrics you want and are able to track. For example, some metrics, like those of Chidamber and Kemerer, are suitable for object-oriented software, while others, like cyclomatic complexity, are more general-purpose.
You might find this analysis interesting or insightful: A Tale of Four Kernels
Edit: schema, and the corresponding queries
Cyclomatic complexity is a decent "quality" metric. I'm sure developers could find a way to "game" it if it were the only metric, though! :)
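As a rough illustration, cyclomatic complexity can be approximated as 1 plus the number of decision points in a function. The keyword set below is my own assumption aimed at C-family source; real tools (NDepend, Visual Studio's analyzer) compute this from the syntax tree rather than from token matching:

```python
import re

# Decision points: branch keywords plus short-circuit operators and the
# ternary '?'. This token-level approximation will miscount keywords that
# appear inside strings or comments.
DECISIONS = re.compile(r"\b(if|while|for|case|catch)\b|&&|\|\||\?")

def cyclomatic_complexity(source):
    """Approximate McCabe complexity of one function body."""
    return 1 + len(DECISIONS.findall(source))
```

Gaming it is easy (split one branchy function into ten small ones and each scores low), which is a good argument for tracking it alongside the other metrics rather than alone.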
And then there's the C.R.A.P. metric...
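For reference, the C.R.A.P. (Change Risk Anti-Patterns) score combines cyclomatic complexity with test coverage: CRAP(m) = comp(m)^2 * (1 - cov(m)/100)^3 + comp(m). A direct transcription of that formula:

```python
def crap_score(complexity, coverage_pct):
    """C.R.A.P. score for a method: fully covered code scores just its
    complexity; complex, untested code scores roughly complexity squared."""
    uncovered = 1.0 - coverage_pct / 100.0
    return complexity ** 2 * uncovered ** 3 + complexity
```

So a method with complexity 5 scores 5 at 100% coverage but 30 at 0% coverage, which is exactly the kind of "improvable over time" number the question asks for.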
P.S. NDepend has about ten billion metrics, so that might be worth looking at. See also CodeMetrics for Reflector.
D'oh! I just noticed that you already mentioned NDepend.
Number of reported bugs would be interesting to track, too...
If you're taking on the task of driving toward better overall code quality, you might take a look at:
- How many open issues do you currently have and how long do they take to resolve?
- What process do you have in place to gather requirements?
- Does your staff follow best practices?
- Do you have SOPs defined that describe your company's programming methodology?
When you have a number of developers involved in a large project, everyone has their own way of programming. Each style may solve the problem, but some solutions may be less efficient than others.
How do you utilize your staff when attacking a new feature or fixing existing code? Having developers work in teams following programming SOPs pushes everyone to be a better coder.
When your people code more efficiently by following the rules, your development time should get shorter.
You can gather all the metrics you want, but I say you first have to see how things are being done:
What are your development practices?
Without knowing how things are currently being done, you can gather all the metrics you want, but you'll never see any improvement.
Amount of software cloning/duplicate code, less is obviously better. (Link discusses clones and various techniques to detect/measure them.)
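A minimal sketch of line-based clone detection: hash every window of consecutive normalized lines and report any hash that appears in more than one place. This is a deliberate simplification, since serious clone detectors compare token or AST sequences and tolerate renamed identifiers:

```python
import hashlib
from collections import defaultdict

def find_clones(files, window=5):
    """files: {filename: source text}. Returns {hash: [(filename, index), ...]}
    for every window of `window` normalized lines seen in 2+ places."""
    seen = defaultdict(list)
    for name, text in files.items():
        # Normalize: strip whitespace, drop blank lines.
        lines = [l.strip() for l in text.splitlines() if l.strip()]
        for i in range(len(lines) - window + 1):
            chunk = "\n".join(lines[i:i + window])
            digest = hashlib.sha1(chunk.encode()).hexdigest()
            seen[digest].append((name, i))
    return {h: locs for h, locs in seen.items() if len(locs) > 1}
```

Even this naive version catches copy-paste duplication verbatim, and "number of cloned windows" trends nicely over time, like the other counts in the question.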
Source: https://stackoverflow.com/questions/1353013/how-would-you-measure-code-quality-across-a-large-project