Code coverage is probably the most controversial code metric. Some say you have to reach 80% code coverage; others say it's superficial and does not say anything about your tests.
I like revenue, sales numbers, profit. They are pretty good metrics of a code base.
As a rule of thumb, defect injection rates trail code yield proportionally, and both typically follow a Rayleigh distribution curve.
At some point your defect detection rate will peak and then start to diminish.
The peak occurs when roughly 40% of the total defects have been discovered.
Moving forward, simple regression analysis lets you estimate how many defects remain in your product at any point after the peak.
This is one component of Lawrence Putnam's model.
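As a rough illustration (a minimal sketch of the idea, not Putnam's actual SLIM tooling): if the discovery rate really follows a Rayleigh curve, the cumulative share of defects found at the peak is 1 - e^(-1/2), about 39%, which gives a quick back-of-the-envelope estimate:

// Back-of-the-envelope sketch, assuming a Rayleigh-shaped discovery rate:
// at the peak of the rate curve, about 1 - e^(-1/2) ~ 39% of all defects
// have been found, so the count observed at the peak scales up to a total.
public class DefectEstimate {

    private static final double SHARE_FOUND_AT_PEAK = 1.0 - Math.exp(-0.5); // ~0.393

    public static void main(String[] args) {
        int foundAtPeak = 120; // hypothetical: defects discovered up to the peak week
        double estimatedTotal = foundAtPeak / SHARE_FOUND_AT_PEAK;
        double estimatedRemaining = estimatedTotal - foundAtPeak;
        System.out.printf("Estimated total: %.0f, still remaining: %.0f%n",
                estimatedTotal, estimatedRemaining);
        // prints roughly: Estimated total: 305, still remaining: 185
    }
}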
Scenario coverage.
I don't think you really want to have 100% code coverage. Testing, say, simple getters and setters looks like a waste of time.
The code always runs in some context, so you may list as many scenarios as you can (depending on the problem's complexity, sometimes even all of them) and test them.
Example:
import java.util.Arrays;
import java.util.List;

// Parses a line from an .ini configuration file,
// e.g. in the form name=value1,value2
List<String> parseConfig(String setting) {
    String[] nameAndValues = setting.split("=", 2);     // "name" and "value1,value2"
    return Arrays.asList(nameAndValues[1].split(","));  // split the values on commas
}
Now, you have many scenarios to test. Some of them:
Passing correct value
Passing null
Passing empty string
Passing an ill-formatted parameter
Passing a string with a leading or trailing comma, e.g. name=value1, or name=,value2
Running just the first test may give you (depending on the code) 100% code coverage. But you haven't considered all the possibilities, so that metric by itself doesn't tell you much.
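A minimal JUnit 5 sketch of a few of these scenarios, assuming the parseConfig method from the example above (the expected exceptions are just what that naive implementation would throw; your real contract may differ):

import static org.junit.jupiter.api.Assertions.*;
import java.util.List;
import org.junit.jupiter.api.Test;

class ParseConfigTest {

    @Test
    void correctValueIsSplitIntoList() {
        // the happy path alone can already reach 100% line coverage
        assertEquals(List.of("value1", "value2"), parseConfig("name=value1,value2"));
    }

    @Test
    void nullInputIsRejected() {
        assertThrows(NullPointerException.class, () -> parseConfig(null));
    }

    @Test
    void missingEqualsSignIsRejected() {
        assertThrows(ArrayIndexOutOfBoundsException.class, () -> parseConfig("no-separator"));
    }
}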
What about watching the trend of code coverage during your project?
As is the case with many other metrics, a single number does not say very much.
For example, it is hard to tell whether there is a problem if "we have a Checkstyle rules compliance of 78.765432%". If yesterday's compliance was 100%, we are definitely in trouble. If it was 50% yesterday, we are probably doing a good job.
I always get nervous when code coverage has gotten lower and lower over time. There are cases when this is okay, so you cannot switch your brain off when looking at charts and numbers.
BTW, Sonar (http://sonar.codehaus.org/) is a great tool for watching trends.
How about (lines of code)/(number of test cases)? Not extremely meaningful (since it depends on LOC), but at least it's easy to calculate.
Another one could be (number of test cases)/(number of methods).
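If you just want to eyeball such a ratio, a naive sketch could look like the following (assuming a standard Maven-style layout and JUnit-style @Test methods; a real tool like Sonar computes this properly):

import java.io.IOException;
import java.nio.file.*;
import java.util.List;

// Naive sketch of (lines of code) / (number of test cases).
// Assumptions: production sources live under src/main/java, tests under
// src/test/java, and each test case is a method annotated with @Test.
public class CoverageRatios {

    public static void main(String[] args) throws IOException {
        long loc = 0, testCases = 0;
        for (Path p : javaFiles(Paths.get("src/main/java"))) {
            loc += Files.readAllLines(p).stream().filter(l -> !l.isBlank()).count();
        }
        for (Path p : javaFiles(Paths.get("src/test/java"))) {
            testCases += Files.readAllLines(p).stream().filter(l -> l.trim().startsWith("@Test")).count();
        }
        System.out.printf("LOC per test case: %.1f%n", (double) loc / testCases);
    }

    private static List<Path> javaFiles(Path root) throws IOException {
        try (var files = Files.walk(root)) {
            return files.filter(f -> f.toString().endsWith(".java")).toList();
        }
    }
}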
I wrote a blog post about why High Test Coverage Ratio is a Good Thing Anyway.
I agree that when a portion of code is executed by tests, it doesn't mean that the validity of the results produced by that code is verified by the tests.
But still, if you are heavily using contracts to check state validity during test execution, high test coverage will mean a lot of verification anyway.
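For instance, a hypothetical sketch using plain Java assertions as the contract mechanism: any test that merely drives this method (with assertions enabled via -ea) already verifies the pre- and postconditions, even if the test itself makes no assertions about the resulting state.

// Hypothetical sketch: invariants checked inside the production code itself.
// With assertions enabled (-ea), every test that executes this method also
// verifies the contract, regardless of what the test explicitly asserts.
public class Account {

    private long balanceInCents;

    public void deposit(long amountInCents) {
        assert amountInCents > 0 : "precondition: deposit must be positive";
        balanceInCents += amountInCents;
        assert balanceInCents >= 0 : "invariant: balance never negative";
    }
}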