Question
The article at onjava seems to imply that basis path coverage is a sufficient substitute for full path coverage, due to some linear-independence/cyclomatic-complexity magic.
Using an example similar to the one in the article:
public int returnInput(int x, boolean one, boolean two)
{
    int y = x;
    if (one)
    {
        y = x - 1;  // takes effect in the result only when 'two' is also true
    }
    if (two)
    {
        x = y;      // on the TT path this returns x - 1 instead of x: the bug
    }
    return x;
}
with the basis set {FF,TF,FT}, the bug is not exposed. Only the untested TT path would expose it.
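For concreteness, here is a minimal harness (my own sketch - the class name, the inlined copy of the method, and the probe value 5 are all mine, not the article's) that drives all four paths:

public class BasisPathDemo
{
    // Compact copy of the method above; same logic, same bug.
    static int returnInput(int x, boolean one, boolean two)
    {
        int y = x;
        if (one) { y = x - 1; }
        if (two) { x = y; }
        return x;
    }

    public static void main(String[] args)
    {
        boolean[][] paths = { {false,false}, {true,false}, {false,true}, {true,true} };
        for (boolean[] p : paths)
        {
            int result = returnInput(5, p[0], p[1]);
            // A faithful returnInput would always hand back 5.
            System.out.println((p[0] ? "T" : "F") + (p[1] ? "T" : "F")
                    + " -> " + result + (result == 5 ? " (pass)" : " (FAIL)"));
        }
    }
}

FF, TF, and FT all print 5 and pass; only TT prints 4 and fails, so the basis set above really does certify the buggy code.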
So, how is basis path coverage useful? It doesn't seem much better than branch coverage.
Answer 1:
[Disclaimer: I've never heard of this technique before, it just looks interesting so I've done a few searches and here's what I think I've found out. Hopefully someone who knows what they're talking about will contribute too...]
I think it's supposed to be a better way of generating branch coverage tests, not a complete substitute for path coverage. There's a far longer document here which restates the goals a bit: http://www.westfallteam.com/sites/default/files/papers/Basis_Path_Testing_Paper.pdf
The onjava article says "the goal of basis path testing is to test all decision outcomes independently of one another. Testing the four basis paths achieves this goal, making the other paths extraneous"
I think "extraneous" here means, "unnecessary to the goal of basis path testing", not as one might assume, "a complete waste of everyone's time".
I think the point of testing branches independently is to break the accidental correlations between the paths that work and the paths you test, which occur with terrifying frequency when I write both the code and an arbitrary set of branch coverage tests myself. There's no magic in the linear independence, it's just a systematic way of generating branch coverage, which discourages the tester from making the same assumptions as the programmer about correlation between branch choices.
So you're right, basis path testing misses your bug, and in general leaves 2^(N-1)-N paths untested, where N is the cyclomatic complexity (assuming N-1 sequential, independent decisions). It just aims to spend its N tests where bugs are most likely to hide, which is more than letting the coder hand-pick N paths typically achieves ;-)
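To put numbers on that for the example in the question (again assuming sequential, independent binary decisions): d = 2 ifs give N = d + 1 = 3, so there are 2^(N-1) = 2^2 = 4 full paths, N = 3 basis paths, and 2^(N-1) - N = 4 - 3 = 1 path left untested - exactly the TT path hiding the bug.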
Answer 2:
Path coverage is no better than any other coverage metric - it is just a metric that shows how much of the code has been exercised. The fact that you can reach 100% branch coverage with the (TF,FT) set of test cases just as well as with (TT,FF) means it is down to luck whether an exit criterion of "stop at 100% coverage" catches the bug.
Coverage should not be the tester's focus - finding bugs should be, and a test case is just a way of demonstrating a bug, much as coverage is a proxy for how much of that bug-finding activity has been done. As with all other white-box methods, getting maximum coverage at minimum cost requires actually understanding the code, to the point where you could write up a defect without a test case at all. The test case is just good for regression, and as documentation of the defect.
For a tester, coverage is only a hint at how much has been done - really, only experience can say how much is enough, and since experience is hard to express numerically we fall back on other measures, i.e. coverage statistics.
Not sure if this makes sense to you; judging by the dates, I guess you have long since moved on from this question...
Answer 3:
My recollection from McCabe's work on this exact subject is: you generate the basis paths systematically, changing one condition at a time - first flipping the last condition on the path, then working back up it, one condition per new path - until there are no new conditions left to change.
Suppose we start with FF, which is the shortest path. Following the algorithm, we change the last if in the chain, yielding FT. We've covered the second if now, meaning: if there was a bug in the second if, surely our two tests were paying attention to what happened when the second if statement suddenly started executing, otherwise our tests aren't working or our code isn't verifiable. Both possibilities suggest our code needs reworking.
Having covered FT, we go back up one node in the path and change the first condition from F to T. When building basis paths, we only change one condition at a time, so we are forced to leave the second if the same, yielding... TT!
We are left with these basis paths: {FF, FT, TT}, which address the issue you raised.
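Here is a small sketch of that generation order as I read it (my own illustration - basisPaths and its signature are mine, not from McCabe's paper): start from a baseline assignment, then flip one condition per new path, last decision first:

public class BasisPathGenerator
{
    // Builds d + 1 basis paths over d sequential decisions; each path
    // differs from the previous one in exactly one condition.
    static java.util.List<boolean[]> basisPaths(int d, boolean baseline)
    {
        java.util.List<boolean[]> paths = new java.util.ArrayList<>();
        boolean[] current = new boolean[d];
        java.util.Arrays.fill(current, baseline);
        paths.add(current.clone());           // baseline, e.g. FF
        for (int i = d - 1; i >= 0; i--)      // flip the last condition first
        {
            current[i] = !current[i];
            paths.add(current.clone());       // FT, then TT
        }
        return paths;
    }

    public static void main(String[] args)
    {
        for (boolean[] p : basisPaths(2, false))
        {
            StringBuilder s = new StringBuilder();
            for (boolean b : p) s.append(b ? 'T' : 'F');
            System.out.println(s);            // prints FF, FT, TT
        }
    }
}

basisPaths(2, true) starts from the longest path instead and yields {TT, TF, FF} - the alternative mentioned below - and either set contains TT, so either set catches the bug from the question.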
But wait, you say, what if the bug occurs in the TF case?? The answer is: we should have already noticed it between pairs of the three tests we did run. Think about it:
- The second if already had its chance to demonstrate its effect on the code, independently of any other changes to the execution of the program, through the FF and FT tests.
- The first if had its chance to demonstrate its independent effect going from FT to TT.
We could have started with the TT case (the longest path). We would have arrived at slightly different basis paths, but they would still exercise each if statement independently.
Notice that in your simple example there is no co-linearity between the conditions of the if statements. Co-linearity cripples basis path generation.
In short: basis path testing, done systematically, avoids the problems you think it has. Basis path testing doesn't tell you how to write verifiable code. (TDD does that.) More to the point, path testing doesn't tell you which assertions you need to make. That's your job as the human.
Source: this is my research area, but I read McCabe's paper on this exact subject a few years back: http://mccabe.com/pdf/mccabe-nist235r.pdf
Source: https://stackoverflow.com/questions/703570/whats-the-point-of-basis-path-coverage