Performance testing best practices when doing TDD?

Backend · unresolved · 9 answers · 980 views
轻奢々 · 2021-02-04 12:30

I'm working on a project which is in serious need of some performance tuning.

How do I write a test that fails if my optimizations do not improve the speed of the program?

9 answers
  • 2021-02-04 13:33

    I haven't faced this situation yet ;) but if I did, here's how I'd go about it. (I think I picked this up from Dave Astels' book.)

    Step#1: Come up with a spec for 'acceptable performance'. For example, this could mean 'the user needs to be able to do Y in N seconds (or milliseconds)'.
    Step#2: Now write a failing test. Use your friendly timer class (e.g. .NET has the Stopwatch class) and Assert.Less(actualTime, MySpec) (see the sketch after these steps).
    Step#3: If the test already passes, you're done. If it's red, you need to optimize and make it green. As soon as the test goes green, the performance is now 'acceptable'.
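
    A minimal sketch of steps 2 and 3, written in Scala with ScalaTest rather than the .NET Stopwatch/Assert.Less the answer mentions; doCriticalOperation and the 200 ms budget are made-up placeholders:

    ```scala
    import org.scalatest.funsuite.AnyFunSuite

    class AcceptablePerformanceSpec extends AnyFunSuite {

      // Hypothetical operation under test; substitute the code path you are tuning.
      private def doCriticalOperation(): Unit = Thread.sleep(50)

      test("critical operation finishes within the agreed budget") {
        val budgetMillis = 200L // Step #1: the agreed spec for 'acceptable performance'
        val start = System.nanoTime()
        doCriticalOperation()
        val elapsedMillis = (System.nanoTime() - start) / 1000000
        // Steps #2/#3: red until the code is fast enough, green once it meets the spec
        assert(elapsedMillis < budgetMillis, s"took $elapsedMillis ms, budget is $budgetMillis ms")
      }
    }
    ```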

  • 2021-02-04 13:33

    Whilst I broadly agree with Carl Manaster's answer, with modern tools it's possible to get some of the advantages that TDD offers for functional testing into performance testing.

    With most modern performance testing frameworks (most of my experience is with Gatling, but I believe the same is true of newer versions of most performance test frameworks), it's possible to integrate automated performance tests into the continuous integration build and configure them so that the CI build fails if the performance requirements aren't met.

    So provided it's possible to agree beforehand on what your performance requirements are (which for some applications may be driven by SLAs agreed with users or clients), this can give you rapid feedback when a change has created a performance issue, and help identify areas that need performance improvement.

    Good performance requirements are along the lines of "when there are 5000 orders per hour, 95% of user journeys should include no more than 10 seconds of waiting, and no screen transition taking more than 1 second".
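
    Purely as an illustration (not part of the original answer), a requirement like the one above can be encoded as Gatling assertions so the CI build fails when they are violated. The base URL, endpoint, and injection profile below are invented, and the per-journey "10 seconds of waiting" budget would additionally need group-level timings, which are omitted here:

    ```scala
    import scala.concurrent.duration._
    import io.gatling.core.Predef._
    import io.gatling.http.Predef._

    class OrderLoadSimulation extends Simulation {

      // Hypothetical production-like test environment.
      private val httpProtocol = http.baseUrl("https://orders-test.example.com")

      private val placeOrder = scenario("Place order")
        .exec(
          http("submit order")
            .post("/orders")
            .body(StringBody("""{"sku": "demo"}""")).asJson
        )

      setUp(
        // Roughly 5000 orders per hour, sustained for one hour.
        placeOrder.inject(constantUsersPerSec(5000.0 / 3600).during(1.hour))
      ).protocols(httpProtocol)
        .assertions(
          // "no screen transition taking more than 1 second"
          global.responseTime.max.lt(1000),
          // fail the build on errors as well
          global.failedRequests.percent.lt(1.0)
        )
    }
    ```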

    This also relies on having deployment to a production-like test environment in your CI pipeline.

    However, it's probably still not a good idea to use performance requirements to drive your development in the same way that you could with functional requirements. With functional requirements, you generally have some insight into whether your application will pass the test before you run it, and it's sensible to try to write code that you think will pass. With performance, trying to optimize code whose performance hasn't been measured is a dubious practice. You can use performance results to drive your application development to some extent, just not performance requirements.

  • 2021-02-04 13:36

    Run the tests plus profiling on the CI server. You can also run load tests periodically.

    You are concerned about differences (as you mentioned), so it's not about defining an absolute value. Add an extra step that compares the performance measures of this run with those of the last build, and report the differences as percentages. You can raise a red flag for significant variations in time (see the sketch below).
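
    A minimal sketch of this idea, assuming one measurement per build is persisted to a file; measureCriticalPath, the file name, and the 10% threshold are placeholders:

    ```scala
    import java.nio.file.{Files, Path, Paths}

    object PerformanceRegressionCheck {

      // Hypothetical measurement of the code path you care about, in milliseconds.
      private def measureCriticalPath(): Double = {
        val start = System.nanoTime()
        // ... run the scenario under test here ...
        (System.nanoTime() - start) / 1e6
      }

      def main(args: Array[String]): Unit = {
        val baselineFile: Path = Paths.get("perf-baseline.txt") // written by the previous build
        val tolerance = 0.10                                    // red flag above a 10% slowdown

        val current = measureCriticalPath()

        if (Files.exists(baselineFile)) {
          val previous = new String(Files.readAllBytes(baselineFile)).trim.toDouble
          val change = (current - previous) / previous
          println(f"previous=$previous%.1f ms, current=$current%.1f ms, change=${change * 100}%.1f%%")
          // Fail the build on an important slowdown relative to the last run.
          if (change > tolerance) sys.exit(1)
        }

        // Record this run as the baseline for the next build on this machine / CI agent.
        Files.write(baselineFile, f"$current%.3f".getBytes)
      }
    }
    ```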

    If you are concerned about performance, you should have clear goals you want to meet, and assert them. You should measure those with tests on the full system. Even if your application logic is fast, you might have issues in the view layer that cause you to miss the goal. You can also combine this with the differences approach, but for these goals you would allow less tolerance for time variations.

    Note that you can run the same process on your dev machine, using only the previous runs from that machine rather than a baseline shared between developers.
