Question
In most of my C++ projects I have heavily used ASSERT statements, like this:
int doWonderfulThings(const int* fantasticData)
{
ASSERT(fantasticData);
if(!fantasticData)
return -1;
// ...
return WOW_VALUE;
}
But the TDD community seems to prefer doing something like this:
int doMoreWonderfulThings(const int* fantasticData)
{
if(!fantasticData)
return ERROR_VALUE;
// ...
return AHA_VALUE;
}
TEST(TDD_Enjoy)
{
    int fantasticValue = 42;
    ASSERT_EQ(ERROR_VALUE, doMoreWonderfulThings(0L));
    ASSERT_EQ(AHA_VALUE, doMoreWonderfulThings(&fantasticValue));
}
In my experience, the first approach has let me remove many subtle bugs. But the TDD approach is a very smart way to handle legacy code.
At Google, they compare the first method to "walking the shore with a life vest, then swimming the ocean without any safeguard".
Which one is better? Which one makes software robust?
Answer 1:
In my (limited) experience the first option is quite a bit safer. In a test case you only test predefined inputs and compare the outcomes, which works well as long as every possible edge case has been checked. The first option checks every input, so it tests the 'live' values; it filters out bugs very quickly, but it comes with a performance penalty.
In Code Complete, Steve McConnell teaches us that the first method can be used successfully to filter out bugs in a debug build. In a release build you can compile out all assertions (for instance with a compiler flag) to regain the performance.
In my opinion the best way is to use both methods:
Method 1 to catch illegal values
int doWonderfulThings(const int* fantasticData)
{
ASSERT(fantasticData);
ASSERT(*fantasticData != 0);   // guard the division below
return WOW_VALUE / *fantasticData;
}
and method 2 to test edge-cases of an algorithm.
int doMoreWonderfulThings(const int fantasticNumber)
{
if (fantasticNumber <= 0)
    return 0;
int count = 100;
for(int i = 0; i < fantasticNumber; ++i) {
count += 10 * fantasticNumber;
}
return count;
}
TEST(TDD_Enjoy)
{
// Test lower edge
ASSERT_EQ(0, doMoreWonderfulThings(-1));
ASSERT_EQ(0, doMoreWonderfulThings(0));
ASSERT_EQ(110, doMoreWonderfulThings(1));
//Test some random values
ASSERT_EQ(350, doMoreWonderfulThings(5));
ASSERT_EQ(2350, doMoreWonderfulThings(15));
ASSERT_EQ(225100, doMoreWonderfulThings(150));
}
Answer 2:
Both mechanisms have value. Any decent test framework will catch the standard assert() anyway, so a test run that causes the assert to fail will result in a failed test.
I typically have a series of asserts at the start of each C++ method with a comment '// preconditions'; it's just a sanity check on the state I expect the object to have when the method is called. These dovetail nicely into any TDD framework because they not only work at runtime when you're testing functionality, they also work at test time.
Answer 3:
There is no reason why your test package cannot catch asserts such as the one in doWonderfulThings. This can be done either by having your ASSERT handler support a callback mechanism, or by having your test code contain a try/catch block.
Answer 4:
I don't know which particular TDD subcommunity you're referring to, but the TDD patterns I've come across either use Assert.AreEqual() for positive results or use an ExpectedException mechanism (e.g., attributes in .NET) to declare the error that should be observed.
Answer 5:
In C++, I prefer method 2 when using most testing frameworks. It usually makes for easier-to-understand failure reports. This is invaluable when a test fails months to years after it was written.
My reason is that most C++ testing frameworks will print out the file and line number of where the assert occurred without any kind of stack trace information. So most of the time you will get the reporting line number inside of the function or method and not inside of the test case.
Even if the assert is caught and re-asserted from the caller, the reported line will be at the catch statement and may not be anywhere close to the test-case line which called the method or function that asserted. This can be really annoying when the function that asserted may have been called multiple times in the test case.
There are exceptions though. For example, Google's test framework has a scoped trace statement which will print as part of the trace if an exception occurs. So you can wrap a call to generalized test function with the trace scope and easily tell, within a line or two, which line in the exact test case failed.
Source: https://stackoverflow.com/questions/17181/test-cases-vs-assertion-statement