Question
Generally, I'm still very much a unit testing neophyte.
BTW, you may also see this question on other forums such as xUnit.net, et cetera,
because it's an important question to me. I apologize in advance for the
cross-posting; your opinions are very important to me, and not everyone
in this forum belongs to the other forums too.
I was looking at a large, decade-old legacy system which has recently had over 700 unit tests
written for it (700 is just a small beginning). The tests happen to be written
in MSTest, but AFAIK this question applies to all testing frameworks.
When I ran "ALL TESTS" via VS2008, the final count was only seven tests executed.
That's about 1% of the total tests that have been written to date.
MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests,
is available on CodePlex; those unit tests are also written in MSTest,
even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team
as its Senior Programmer. All 2,000-plus tests get run, not just a few.
QUESTION: Given that, AFAIK, the purpose of unit tests is to identify breakages
in the SUT, am I correct in thinking that the "best practice" is to always,
or at least very frequently, run all of the tests?
UPDATE 2010-05-22:
First, thank you to everyone who has provided excellent answers. Your answers confirm my general conclusion that running all unit tests after every local rebuild is the best practice, regardless of whether one is practicing TDD (test before) or classic unit testing (test after).
imho, there is more than one best answer to this question, but AFAIK SO lets me select just one, so in an effort to be fair I've given the check mark to Pete Johns for being first and for earning the most upvotes from the SO community.

Finland's Esko Luontola also gave a great answer (I hope he's not getting buried in volcanic ash), along with two very good links that are worth your time, imho. The link to F.I.R.S.T. is, to me, inspirational; AFAIK, only xUnit.net in the .NET world offers the "any order, any time" property. Esko's second link is to a truly excellent 92-minute video, "Integration Tests Are a Scam", presented by J. B. (Joe) Rainsberger (http://jbrains.ca, where there is more content worth my time). BTW, Esko's weblog is also worth a visit: http://orfjackal.net.
Answer 1:
Since you have tagged this question 'TDD', all of the unit tests for the module being developed should be executed (and pass, bar the newest one, until you make it pass) with every compilation. Unit tests in other modules should not be broken by development in the current module, or else they are testing too much.
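To illustrate that isolation, here is a minimal sketch in MSTest (all names are hypothetical): OrderProcessor depends on an interface rather than on the real tax-calculation module, so development in that other module cannot break this test.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public interface ITaxCalculator
    {
        decimal TaxFor(decimal amount);
    }

    public class OrderProcessor
    {
        private readonly ITaxCalculator taxCalculator;

        public OrderProcessor(ITaxCalculator taxCalculator)
        {
            this.taxCalculator = taxCalculator;
        }

        public decimal Total(decimal amount)
        {
            return amount + taxCalculator.TaxFor(amount);
        }
    }

    [TestClass]
    public class OrderProcessorTests
    {
        // A hand-rolled stub keeps this test independent of the real calculator.
        private class FlatTaxStub : ITaxCalculator
        {
            public decimal TaxFor(decimal amount) { return 1.00m; }
        }

        [TestMethod]
        public void Total_AddsTaxFromCalculator()
        {
            var processor = new OrderProcessor(new FlatTaxStub());
            Assert.AreEqual(11.00m, processor.Total(10.00m));
        }
    }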
A continuous integration loop should also be in place to ensure that all of the tests are run with every check-in to your source control system. This will trap breakages early.
At the very least, a nightly build should run each and every test, and any breakages should be fixed first thing in the morning. Tolerate no unit test failures!
Answer 2:
It should be possible to run the unit tests quickly, so that you can run them after every simple change. As stated at http://agileinaflash.blogspot.com/2009/02/first.html:
Tests must be fast. If you hesitate to run the tests after a simple one-liner change, your tests are far too slow. Make the tests so fast you don't have to consider them. [...] A software project will eventually have tens of thousands of unit tests, and team members need to run them all every minute or so without guilt. You do the math.
Personally, my pain threshold is around 5-10 seconds; if it takes longer than that to compile and run all unit tests, it will seriously annoy and slow me down.
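To make the quoted arithmetic concrete (the figures below are illustrative, not from the original answer):

    10,000 tests run in 10 seconds = 1,000 tests per second,
    i.e. an average budget of roughly 1 millisecond per test.
    A budget that small rules out disk, network, and database access.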
If there are slow integration tests (which should be avoided), they can instead be run on every check-in: preferably by the developer before checking in, and then once more on the continuous integration server.
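One way to keep such slow tests out of the everyday run is to tag them so a category filter can exclude them. Here is a minimal sketch with hypothetical names, assuming a version of MSTest that supports the TestCategory attribute (VS2010 and later; it is not available in the VS2008-era framework):

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    public class Customer
    {
        private readonly List<int> orders = new List<int>();
        public List<int> Orders { get { return orders; } }
    }

    [TestClass]
    public class CustomerRepositoryTests
    {
        [TestMethod]
        [TestCategory("Integration")] // slow: would exercise real infrastructure
        public void Save_PersistsCustomer()
        {
            // ...talk to a real database here...
        }

        [TestMethod] // untagged, so it stays in the fast unit-test run
        public void NewCustomer_StartsWithNoOrders()
        {
            Assert.AreEqual(0, new Customer().Orders.Count);
        }
    }

With the tests tagged this way, the local run can exclude the "Integration" category (MSTest.exe exposes a /category switch for this) while the continuous integration server runs everything.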
Answer 3:
The best practice is certainly to run the unit tests at or before check-in. Of course, it is possible that the unit tests passed on the developer's machine before check-in, but once the change became part of the actual code base, other updates conspired to break a test; so the tests should really be run against the complete, integrated code as well.
There are tools to help you with this, such as TeamCity, which supports conditional (pre-tested) commits: it runs the tests for you before the check-in completes, and it also runs a build, including the tests if so configured, after every check-in. (This practice is called continuous integration.)
Regardless of the best practice, the reality is that running your tests too late makes it much harder to track down the cause of a failure; over time, failing tests will either be commented out or, even worse, be allowed to remain failing, causing new failures to go unnoticed.
Answer 4:
Since unit tests can be automated, I would suggest running them as frequently as possible, especially when you have this many.
A best practice is to run unit tests as part of a nightly build process.
Answer 5:
Ideally, the unit tests should be run as part of every build and prior to every check-in of changed code.
However, with a large number of tests this can take a significant time, so you need to be able to manage that.
Running only a subset of the tests might be adequate if the subset were rotated and the full set run, say, once a week, but that still allows a breaking change to sit in the code base for a few days.
You have to run all the tests prior to a check-in, as you have no other way of knowing that your change hasn't had an adverse effect on some other part of the code. I've seen this time and again, and without unit tests it's very hard to spot.
Source: https://stackoverflow.com/questions/2882395/how-often-should-the-entire-suite-of-a-systems-unit-tests-be-run