TDD: When can you move on?

南旧 2021-01-11 10:36

When doing TDD, how do you tell "that's enough tests for this class / feature"?

I.e., how can you tell that you have covered all the edge cases?

13 answers
  • 2021-01-11 11:21

    Many of the other comments have hit the nail on the head. Do you feel confident about the code you have written given your test coverage? As your code evolves do your tests still adequately cover it? Do your tests capture the intended behaviour and functionality for the component under test?

    There must be a happy medium. As you add more and more test cases, your tests may become brittle, because what is considered an edge case continuously changes. Following many of the earlier suggestions, it can be very helpful to get everything you can think of up front and then add new tests as the software grows. This kind of organic growth helps your test suite keep pace without all the effort up front.

    I am not going to lie: I often get lazy when going back to write additional tests. I might skip the property that contains no code or the default constructor that I do not care about. Sometimes not being completely anal about the process can save you time in areas that are less than critical (the 100% code coverage myth).

    You have to remember that the end goal is to get a top-notch product out the door, not to kill yourself testing. If you have that gut feeling that you are missing something, then chances are you are, and you need to add more tests.

    Good luck and happy coding.

  • 2021-01-11 11:21

    You could always use a test coverage tool like EMMA (http://emma.sourceforge.net/) or its Eclipse plugin EclEmma (http://www.eclemma.org/) or the like. Some developers believe that 100% test coverage is a worthy goal; others disagree.

  • 2021-01-11 11:22

    Tests in TDD are about covering the specification; in fact, they can be a substitute for a specification. In TDD, tests are not about covering the code. They ensure the code covers the specification, because the code will fail a test if it doesn't. Any extra code you have doesn't matter.

    So you have enough tests when the tests look like they describe all the expectations that you or the stakeholders have.
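
    For illustration, here is a minimal JUnit 5 sketch of tests written as statements of a hypothetical discount specification; the Discount class and its rules are invented here purely to show the style, not taken from the question:

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import static org.junit.jupiter.api.Assertions.assertThrows;

        import org.junit.jupiter.api.Test;

        class DiscountSpecTest {

            // Minimal code under test, included only so the sketch is self-contained.
            static class Discount {
                int priceInCents(int listPriceCents, boolean goldCustomer) {
                    if (listPriceCents < 0)
                        throw new IllegalArgumentException("negative price");
                    return goldCustomer ? listPriceCents * 90 / 100 : listPriceCents;
                }
            }

            // Each test reads as one sentence of the specification.
            @Test
            void regularCustomersPayTheListPrice() {
                assertEquals(10_000, new Discount().priceInCents(10_000, false));
            }

            @Test
            void goldCustomersGetTenPercentOff() {
                assertEquals(9_000, new Discount().priceInCents(10_000, true));
            }

            @Test
            void negativePricesAreRejected() {
                assertThrows(IllegalArgumentException.class,
                        () -> new Discount().priceInCents(-1, false));
            }
        }

    When every expectation the stakeholders care about appears as a test like these, and they all pass, that piece of the specification is done.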

  • 2021-01-11 11:23

    Just try to come up with every way, within reason, that you could cause something to fail: null values, values out of range, etc. Once you can't easily come up with anything else, move on to something else.
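
    As a concrete sketch (JUnit 5; the Ages.parse helper is hypothetical, not from the question), "every way you could cause it to fail" often boils down to a short list of tests like this:

        import static org.junit.jupiter.api.Assertions.assertEquals;
        import static org.junit.jupiter.api.Assertions.assertThrows;

        import org.junit.jupiter.api.Test;

        class AgeParsingEdgeCaseTest {

            // Minimal code under test, included only so the sketch is self-contained.
            static class Ages {
                static int parse(String text) {
                    if (text == null || text.isBlank())
                        throw new IllegalArgumentException("missing age");
                    int age = Integer.parseInt(text.trim()); // NumberFormatException on junk input
                    if (age < 0 || age > 150)
                        throw new IllegalArgumentException("age out of range: " + age);
                    return age;
                }
            }

            @Test
            void parsesAnOrdinaryValue() {
                assertEquals(42, Ages.parse("42"));
            }

            @Test
            void rejectsNullOrBlankInput() {
                assertThrows(IllegalArgumentException.class, () -> Ages.parse(null));
                assertThrows(IllegalArgumentException.class, () -> Ages.parse("  "));
            }

            @Test
            void rejectsValuesOutOfRange() {
                assertThrows(IllegalArgumentException.class, () -> Ages.parse("-1"));
                assertThrows(IllegalArgumentException.class, () -> Ages.parse("151"));
            }
        }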

    If down the road you ever find a new bug or think of a new way to break the code, add a test for it.

    It is not about code coverage. That is a dangerous metric, because code is "covered" long before it is "tested well".

  • 2021-01-11 11:25

    Maybe I missed something somewhere in the Agile/XP world, but my understanding of the process was that the developer and the customer specify the tests as part of the feature. This allows the test cases to substitute for more formal requirements documentation, helps identify the use cases for the feature, etc. So you're done testing and coding when all of these tests pass, plus any more edge cases that you think of along the way.

  • 2021-01-11 11:25

    Theoretically you should cover all possible input combinations and test that the output is correct, but sometimes it's just not worth it.
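
    When exhaustive coverage really is impractical, a parameterized test at least keeps the sampled cases cheap to add and easy to read. A minimal JUnit 5 sketch (the leap-year example is hypothetical, chosen only because its input space is obviously too large to enumerate):

        import static org.junit.jupiter.api.Assertions.assertEquals;

        import org.junit.jupiter.params.ParameterizedTest;
        import org.junit.jupiter.params.provider.CsvSource;

        class LeapYearTest {

            // Minimal code under test, included only so the sketch is self-contained.
            static boolean isLeapYear(int year) {
                return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
            }

            // A deliberately small sample of the input space, one row per interesting case.
            @ParameterizedTest
            @CsvSource({
                    "2000, true",   // divisible by 400
                    "1900, false",  // divisible by 100 but not by 400
                    "2024, true",   // ordinary leap year
                    "2023, false"   // ordinary non-leap year
            })
            void classifiesYears(int year, boolean expected) {
                assertEquals(expected, isLeapYear(year));
            }
        }

    Adding another combination is one more line in the @CsvSource, so the cost of covering a new case stays low even when full coverage is off the table.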
