Is there a way to calculate elapsed time for test methods ignoring time spent initializing?

死守一世寂寞 asked on 2021-01-19 15:52 · 4 answers · 1727 views

This is the kind of question that treads the gray area between StackOverflow and SuperUser. As I suspect the answer is likely to involve code-related solutions, such as crea…

4 Answers
  • 2021-01-19 16:04

    Your tests should aim to answer questions, such as:

    1. Does my code behave as expected?
    2. Does my code perform as expected?

    However, rather than relying on the testing framework's inner workings to time your code (a task to which it is particularly unsuited), consider instead writing tests that test the performance of specific code and/or routines.

    You could, for example, write a test method which starts a stopwatch, performs some work, stops the stopwatch and measures how long the operation took. You should then be able to assert that the test didn't exceed an expected maximum duration and, if it did, you'd see it as a failed test.

    This way you're not measuring the unpredictable performance of your testing infrastructure, you're actually testing the performance of your code.
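
    As a sketch of that approach (assuming MSTest, with a hypothetical DoExpensiveWork standing in for the code under test), a timed test might look like this:

    using System.Diagnostics;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class PerformanceTests
    {
        [TestMethod]
        public void ExpensiveOperation_CompletesWithinBudget()
        {
            var stopwatch = Stopwatch.StartNew();

            DoExpensiveWork();

            stopwatch.Stop();

            // Fail the test if the operation exceeded its time budget.
            Assert.IsTrue(stopwatch.ElapsedMilliseconds < 500,
                $"Took {stopwatch.ElapsedMilliseconds} ms; expected < 500 ms.");
        }

        private static void DoExpensiveWork()
        {
            // Stand-in for the routine whose performance you care about.
            System.Threading.Thread.Sleep(100);
        }
    }

    The 500 ms budget here is arbitrary; pick a threshold with enough headroom that normal machine-to-machine variance doesn't produce flaky failures.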

    Also, as Aravol suggested, you could front-load the cost of your test setup by populating your Moq mocks in a static constructor, since static constructors run before any instance is created or any instance method executes.

  • 2021-01-19 16:11

    I don't believe that there are many instances in which a lengthy test init is actually required. It would be interesting to know what kind of operations were being included in the poster's example.

    A bit of creativity is required to split a TestInit's work between it and a ClassInit (someone earlier suggested using a constructor... kind of the same thing, but errors in that block of code will be reported quite differently). For example, if every test needs a List<> of strings that's read from a file, you split it this way:

    1) ClassInit - read the file, capture the strings into an array (the slow part)
    2) TestInit - copy the array's elements into a List<> accessible by each test (the fast part)
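
    A minimal sketch of that split (assuming MSTest and a hypothetical input file strings.txt): the slow file read happens once in [ClassInitialize], and each test gets its own fresh List<string> in [TestInitialize]:

    using System.Collections.Generic;
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class FileBackedTests
    {
        // Populated once for the whole class (the slow part).
        private static string[] _lines;

        // A fresh copy per test (the fast part), preserving isolation.
        private List<string> _strings;

        [ClassInitialize]
        public static void ClassInit(TestContext context)
        {
            _lines = File.ReadAllLines("strings.txt"); // hypothetical input file
        }

        [TestInitialize]
        public void TestInit()
        {
            _strings = new List<string>(_lines);
        }

        [TestMethod]
        public void EachTestGetsItsOwnCopy()
        {
            _strings.Add("mutations here cannot leak into other tests");
            Assert.IsTrue(_strings.Count > 0);
        }
    }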

    I'm against using statics to solve a test performance problem; it ruins the tests' isolation from one another.

    I'm also against tests using things like Stopwatches to assert on their own performance... running tests generates a report, so watchers of that report should identify tests that run too long. And if we want automated tests to exercise the performance of something, that's not a unit test, that's a performance test, and it can (should?) be something entirely different.

  • 2021-01-19 16:22

    I discovered today a method of handling expensive initialization in MSTest without it causing the tests to report as slow. I post this answer for consideration without accepting it because it does have a modest code smell.

    MSTest creates a new instance of the test class each time it runs a test, so code in an instance constructor runs once per test. This behavior is similar to a [TestInitialize] method, with one exception: MSTest begins timing the unit test after creating the instance of the test class and before executing the [TestInitialize] routine.

    As a result of this MSTest-specific behavior, initialization code that should be excluded from the automatically generated timing statistics can be placed in the constructor.

    To demonstrate what I mean, consider the following test and generated output.

    Test:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class ConstructorTest
    {
        public ConstructorTest()
        {
            // Runs once per test, but before MSTest starts its timer,
            // so this delay never appears in the reported durations.
            System.Threading.Thread.Sleep(10000);
        }

        [TestMethod]
        public void Index()
        {
        }

        [TestMethod]
        public void About()
        {
        }
    }
    

    Output:

    [Screenshot of results]

    My Thoughts:

    The code above certainly produces the effect I was looking for; however, while it appears safe to use either a constructor or a [TestInitialize] method to handle initialization, I must assume that the latter exists in this framework for a good reason.

    A case might be made that reports which include initialization time are useful, for example when estimating how much real time a large suite of tests should be expected to consume.

    Rich Turner's point that time-sensitive operations deserve stopwatches with assertions is also worth recognizing (and has my vote). On the other hand, I see the automatically generated timing reports in Visual Studio as a useful tool for identifying tests that are getting out of hand, without having to author timing boilerplate in every test.

    In all, I am pleased to have found a solution and appreciate the alternatives discussed here as well.

    Cheers!

  • 2021-01-19 16:25

    The problem you run into is that the unit test framework doesn't let you adjust the reported timing or emit extra data - tests just execute and finish.

    One way you can do this is to violate unit-testing conventions and use a static reference plus a static constructor to prepare your backing data. While not technically guaranteed, VS 2013 does execute all unit tests in the same AppDomain (though via separate instances of the given TestClass).
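
    A minimal sketch of that idea, with a hypothetical LoadExpensiveData standing in for whatever slow setup you need (note the isolation caveat raised in the other answers, since every test now shares the same data):

    using System.Collections.Generic;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class StaticSetupTests
    {
        // Shared by every instance of this TestClass; the static
        // constructor runs once, before any instance is created
        // and before MSTest starts timing individual tests.
        private static readonly List<string> BackingData;

        static StaticSetupTests()
        {
            BackingData = LoadExpensiveData();
        }

        private static List<string> LoadExpensiveData()
        {
            System.Threading.Thread.Sleep(5000); // stand-in for real work
            return new List<string> { "seed" };
        }

        [TestMethod]
        public void CanReadBackingData()
        {
            Assert.IsTrue(BackingData.Count > 0);
        }
    }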
