Question
I have a "recipe" method which I am trying to write using TDD. It basically calls out to several other methods and occasionally makes decisions based on the results of those calls:
public void HandleNewData(Data data)
{
    if (data == null)
        return;

    var existingDataStore = dataProvider.Find(data.ID);

    UpdateDataStore(existingDataStore, data, CurrentDateTime);
    NotifyReceivedData(data);

    if (!dataValidator.Validate(data))
        return;

    // ... more operations similar to the above
}
My knee-jerk reaction would be to start writing test cases that verify HandleNewData calls the methods seen above with the expected arguments, and that it returns early in the cases where those calls fail. But that feels like a huge investment of time to code such a test up, with little to no actual benefit.
So what is the real benefit of writing such a test? Or is it really not worth the bother?
It seems like it is just an over-specification of the code itself, and it will lead to maintenance problems whenever that code has to call another method, or stops calling one of the current methods.
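To make it concrete, here is a minimal sketch of the kind of test I have in mind, assuming Moq and xUnit, and assuming hypothetical names (IDataProvider, IDataValidator, a DataHandler host class and its constructor wiring) for the pieces the snippet above does not show:

using Moq;
using Xunit;

public class HandleNewDataTests
{
    [Fact]
    public void HandleNewData_FindsExistingStore_AndValidates()
    {
        var data = new Data { ID = "42" };               // ID assumed to be a string here
        var provider = new Mock<IDataProvider>();
        var validator = new Mock<IDataValidator>();
        validator.Setup(v => v.Validate(data)).Returns(true);

        var sut = new DataHandler(provider.Object, validator.Object);

        sut.HandleNewData(data);

        // These Verify calls restate the implementation call by call,
        // which is exactly the over-specification I am worried about.
        provider.Verify(p => p.Find(data.ID), Times.Once());
        validator.Verify(v => v.Validate(data), Times.Once());
    }
}

Every time HandleNewData gains or loses a call, a test like this has to change with it.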
Answer 1:
TDD does not mean writing unit tests for code that already exists (although sometimes it may be necessary when improving legacy code).
You've probably heard the term "Red, Green, Refactor". This is the approach we take when doing TDD. Here are the three laws of Test-Driven Development, which take that a little further...
- You may not write production code until you have written a failing unit test.
- You may not write more of a unit test than is sufficient to fail, and not compiling is failing.
- You may not write more production code than is sufficient to pass the currently failing test.
The benefit of taking this approach is that you end up with very close to 100% unit-test coverage and you know that your code works exactly as specified.
It will reduce maintenance problems because as soon as somebody makes a change to your code and runs the tests, they will know if they have broken anything.
In this case, I would incrementally add unit tests for the methods being called from HandleNewData() before adding any for HandleNewData() itself.
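For example, a first test for the validator might look something like this. This is only a sketch assuming xUnit; DataValidator is my guess at the concrete type behind the dataValidator field, and the rule that data without an ID is invalid is invented purely for illustration, so substitute your real rules:

using Xunit;

public class DataValidatorTests
{
    [Fact]
    public void Validate_ReturnsFalse_WhenIdIsMissing()
    {
        var validator = new DataValidator();

        var result = validator.Validate(new Data { ID = "" });

        Assert.False(result);
    }

    [Fact]
    public void Validate_ReturnsTrue_ForWellFormedData()
    {
        var validator = new DataValidator();

        var result = validator.Validate(new Data { ID = "42" });

        Assert.True(result);
    }
}

Per the three laws above, each of these tests is written (and seen to fail) before the production code that makes it pass.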
Adding unit tests to legacy code is tough, but doable and very much worth the effort. If you haven't yet, I really recommend reading Working Effectively with Legacy Code by Michael Feathers. I've found it invaluable when adding unit tests to a 25-year-old code-base.
Answer 2:
The problem you've come across is very common. You've got some nasty untested legacy code which does way too much and is tightly coupled to too many collaborators. Writing a test for this is indeed painful.
The problem is that you're unfortunately saddled with this code-debt and, at some point, you're going to have to pay it.
So, to start paying down some of this debt: if you need to change this code, I would mock out as much as possible to get a single pass through the method under test, so that you have a shell of a test in place and can add your new functionality. If at all possible, I would make the new functionality a single call to another collaborator, which is where you can put (and test-drive!) your new code.
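In code, the "shell" might look something like this. This is only a sketch assuming Moq and xUnit; INewDataAuditor, the other interface names, and the DataHandler constructor wiring are invented stand-ins for your actual collaborators and new functionality:

using Moq;
using Xunit;

public class HandleNewDataShellTests
{
    [Fact]
    public void HandleNewData_CallsTheNewCollaborator_OnValidData()
    {
        var data = new Data { ID = "42" };
        var provider = new Mock<IDataProvider>();
        var validator = new Mock<IDataValidator>();
        var auditor = new Mock<INewDataAuditor>();            // the new, separately test-driven collaborator
        validator.Setup(v => v.Validate(data)).Returns(true); // just enough setup for one pass through the method

        var sut = new DataHandler(provider.Object, validator.Object, auditor.Object);

        sut.HandleNewData(data);

        // Only pins down that the legacy method reaches the new code.
        auditor.Verify(a => a.Record(data), Times.Once());
    }
}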
That way you have some basic confidence that the old code calls your new code, and that the new code has been properly built up through TDD.
You still of course have the code-debt of the original legacy code but you can tackle that as a separate problem.
Answer 3:
Since you already clarified it is "Legacy Code (TM)", I'll go easy on the design of that method. The name itself is vague, and that is reflected in its contents. Still, I'd take a look at improving the design a little - it seems to be doing a lot.
But to do that, I'd have to ensure that I'm not making it worse under the pretext of "making it better". How do I prove that? Tests!
So I'd begin by putting "vice" tests on "chunks" of functionality of the top-level objects. I'd put in tests that verify the behavior of HandleNewData to the best of my knowledge today (this may include some code excavation); a sketch of one such test follows the list:
- looks up the data store for the ID
- updates the data store with the new data and revises the timestamp
- notifies interested listeners
- validates the data (it should probably be step 2, from the look of it), etc.
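Such a "vice" test might look like this. This is only a sketch assuming Moq and xUnit, with invented interface names (IDataProvider, IDataValidator, IDataNotifier) for collaborators the question's snippet hides behind private helpers; note that it pins down the behavior that exists today, odd ordering included, not the behavior we might prefer:

using Moq;
using Xunit;

public class HandleNewDataCharacterizationTests
{
    [Fact]
    public void HandleNewData_NotifiesListeners_EvenWhenValidationFails()
    {
        var data = new Data { ID = "42" };
        var provider = new Mock<IDataProvider>();
        var validator = new Mock<IDataValidator>();
        var notifier = new Mock<IDataNotifier>();
        validator.Setup(v => v.Validate(data)).Returns(false); // today, notification happens before validation

        var sut = new DataHandler(provider.Object, validator.Object, notifier.Object);

        sut.HandleNewData(data);

        notifier.Verify(n => n.NotifyReceivedData(data), Times.Once());
    }
}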
Once I have the existing behavior pinned down by some automated "vice" tests, I can make changes and improvements with confidence. It could also turn out that, once the design is refactored, the type containing HandleNewData is no longer needed - in which case you could throw these tests away. However, the value these tests provide while moving from EXISTING to IMPROVED should not be overlooked.
Source: https://stackoverflow.com/questions/6103807/unit-testing-philosophy