How much should each of my unit tests examine? For instance, I have this test:
[TestMethod]
public void IndexReturnsAView()
{
    IActivityRepository repository = GetPopulatedRepository();
    ActivityController activityController = GetActivityController(repository);
    ActionResult result = activityController.Index();
    Assert.IsInstanceOfType(result, typeof(ViewResult));
}
and also
[TestMethod]
public void IndexReturnsAViewWithAListOfActivitiesInModelData()
{
    IActivityRepository repository = GetPopulatedRepository();
    ActivityController activityController = GetActivityController(repository);
    ViewResult result = activityController.Index() as ViewResult;
    Assert.IsInstanceOfType(result.ViewData.Model, typeof(List<Activity>));
}
Obviously, if the first test fails then so will the second, so should these two be combined into one test with two asserts? My feeling is that the more granular the tests, and the less each test checks, the faster it will be to find the cause of a failure. However, there is overhead to having a huge number of very small tests, which might cost time when running the whole suite.
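For reference, the combined version I have in mind would look roughly like this (a sketch only, reusing the same helper methods as above):

[TestMethod]
public void IndexReturnsAViewWithAListOfActivitiesInModelData_Combined()
{
    IActivityRepository repository = GetPopulatedRepository();
    ActivityController activityController = GetActivityController(repository);
    ActionResult result = activityController.Index();
    // If this first assert fails, the second one never runs.
    Assert.IsInstanceOfType(result, typeof(ViewResult));
    ViewResult viewResult = (ViewResult)result;
    Assert.IsInstanceOfType(viewResult.ViewData.Model, typeof(List<Activity>));
}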
I'd recommend breaking them down as much as possible.
There are lots of reasons for this, IMHO the most important ones are:
When one of your tests fails, you want to be able to isolate exactly what went wrong as quickly and as safely as possible. Having each test method check a single thing is the best way to achieve this.
Each test needs to start with a clean slate. If you create the repository once and then use it in 2 or more tests, then you have an implicit dependency on the order of those tests. Say Test1 adds an item to the repository but forgets to delete it. Test2's behavior will now be different, and possibly cause your test to fail. The only exception to this is immutable data.
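As a purely hypothetical sketch of that trap (the Add and GetAll calls and the seed count of 3 are assumed here, not taken from your code):

[TestClass]
public class SharedRepositoryTests
{
    // A single repository shared by every test: the hidden order dependency.
    private static IActivityRepository sharedRepository = GetPopulatedRepository();

    [TestMethod]
    public void Test1_AddsAnActivity()
    {
        // Mutates shared state and never cleans it up.
        sharedRepository.Add(new Activity());
        Assert.IsInstanceOfType(sharedRepository.GetAll(), typeof(List<Activity>));
    }

    [TestMethod]
    public void Test2_ExpectsOnlyTheOriginalActivities()
    {
        // Assumes GetPopulatedRepository() seeded exactly 3 activities;
        // this passes or fails depending on whether Test1 ran first.
        Assert.AreEqual(3, sharedRepository.GetAll().Count);
    }
}

Creating the repository fresh inside each test (or in a per-test setup method) removes that dependency entirely.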
Regarding your speed concerns, I wouldn't worry about it. For pure code-crunching like this, .NET is very fast, and you'll never be able to tell the difference. As soon as you get out of code-crunching and into things like databases, then you'll feel the performance issues, but as soon as you do that you run into all the "clean slate" issues as described above, so you may just have to live with it (or make as much of your data immutable as possible).
Best of luck with your testing.
The more fine-grained the better. When an assert fails in a test case, the rest of the case is not run, so the later parts, which might have uncovered other errors, never get a chance to execute.
If there's shared code between test cases, use setup/teardown functions to take care of that without repeating yourself too much. Time cost is often negligible. If the setup/teardown takes too much time, you're probably not doing unit testing but some higher level automated testing. Unit tests ideally should not have file system, network, database etc. dependencies.
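In MSTest, for example, the duplicated setup from your two tests could move into a [TestInitialize] method, roughly like this (a sketch reusing your GetPopulatedRepository and GetActivityController helpers):

[TestClass]
public class ActivityControllerTests
{
    private IActivityRepository repository;
    private ActivityController activityController;

    // Runs before every test, so each test starts from a clean slate.
    [TestInitialize]
    public void Setup()
    {
        repository = GetPopulatedRepository();
        activityController = GetActivityController(repository);
    }

    [TestMethod]
    public void IndexReturnsAView()
    {
        ActionResult result = activityController.Index();
        Assert.IsInstanceOfType(result, typeof(ViewResult));
    }

    [TestMethod]
    public void IndexReturnsAViewWithAListOfActivitiesInModelData()
    {
        ViewResult result = activityController.Index() as ViewResult;
        Assert.IsInstanceOfType(result.ViewData.Model, typeof(List<Activity>));
    }
}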
I think the "standard" answer is that a bug in the code should break exactly one test, and that failure should not hide any other failures (i.e., it should not stop the other tests from running). Each test tests one thing, and no two tests test the same thing. That's an ideal, not always achievable. Call it guidance.
That being said, it is really an art. I would put the performance issues aside initially and focus more on maintainability. As written, you have two and a half to three lines of duplication; if the design changes, that will get hard to maintain. The duplication itself can be solved by a setup method and a field in the class in this case, but the main thing to worry about is maintainability.
The tests should be small enough to be maintainable, easy to understand, and something that makes it reasonable for others (or you after time has passed) to understand what the code is doing and be able to maintain the tests.
A unit test should test exactly what is described in your technical design, viewed from the perspective of the functional design.
How much a single test covers is definitely something you need to decide up front and then stick to. I don't believe everyone should follow the same approach uniformly, since different teams and/or projects have different priorities around coding, performance, troubleshooting, test infrastructure, etc. But being consistent will always help you:
- identify problems more quickly, since you know in advance how deep to dig;
- spend less time constructing your tests;
- employ the same set of test helper classes while implementing tests;
- run tests at a reasonable speed: not too fast and not too slow;
- organize tests (suites, packages, etc.).
If you decide that performance is more important, then implement thicker tests with more validations/assertions. If you decide that troubleshooting is paramount, then isolate your tests as much as needed. I can't see why thick, well-structured tests are flawed; such tests get the same job done as a greater number of thinner tests, and just as well.
Of course, every test still needs to focus on a particular function/feature, but that is not really the topic of this thread.
Source: https://stackoverflow.com/questions/962821/how-much-should-each-unit-test-test