TDD: why might it be wrong to let app code know it is being tested, not run?

Asked by 说谎 on 2021-01-19 19:35

In this thread, Brian (the only answerer) says "Your code should be written in such a fashion that it is testing-agnostic".

The single comment says "Your code should definitely not branch on a global 'am I being tested' flag".

5 Answers
  • 2021-01-19 19:41

    I will split this answer into two sections. First I'll share my thoughts on Brian's answer, then I'll share some tips on how to test effectively.

    An explanation of Brian's answer

    There appear to be two key ideas that Brian is hinting at. I will address each one individually.

    Idea 1: Production code should not depend on tests

    Your code should be written in such a fashion that it is testing-agnostic.

    The production code should not depend on tests. It should be the reverse.

    There are multiple reasons for this:

    1. Changing your tests will not change the behaviour of your code.
    2. Your production code can be compiled and deployed independently of the test code.
    3. Your code won't need to be recompiled when updating the tests.
    4. Your production code cannot possibly fail due to unintended side effects from not running the test code.

    Note: Any decent compiler will remove the test code. Although I don't think this is an excuse to poorly design/test your system.

    Idea 2: You should test abstractions rather than implementations

    Whatever environment you test in should be as close to real-world as possible.

    It sounds like Brian might be hinting at this idea within his answer. Unlike the last idea, this one isn't universally agreed upon, so take it with a grain of salt.

    By testing abstractions, you develop a level of respect for the unit being tested. You agree that you will not hoke around with its internals and spy on its internal state.

    Why shouldn't I spy on the state of objects during testing?

    By spying on the innards of an object, you are causing these problems:

    1. Your tests will tie you to a specific implementation of a unit.

      For example...
      Want to change your class to use a different sorting algorithm? Too bad, your tests will fail because you've asserted that the quicksort function must be called.

    2. You will break encapsulation.

      By testing the internal state of an object, you will be tempted to loosen some of the privacy that the object has. This will mean that more of your production code will also have increased visibility into your object.

      By loosening the encapsulation of your object, you are tempting other production code to also depend on it. This can not only tie your tests to a specific implementation, but also your entire system itself. You do not want this to happen.

    Then how do I know if the class works?

    Test the pre-conditions and post-conditions/results of the method being called. If you need more complex tests, look at the final section I've written on mocking and dependency injection.
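
    For example, here is a minimal sketch of that idea (assuming JUnit 4 and a hypothetical NumberSorter class): the test exercises only the public contract and asserts on the result, not on which algorithm produced it.

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.List;

    import org.junit.Test;

    public class NumberSorterTest {

        @Test
        public void sortsNumbersAscending() {
            NumberSorter sorter = new NumberSorter();   // hypothetical class under test

            List<Integer> result = sorter.sort(Arrays.asList(3, 1, 2));

            // Post-condition: the observable result is sorted.
            // No assertion about quicksort, mergesort, or any internal state.
            assertEquals(Arrays.asList(1, 2, 3), result);
        }
    }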

    Mini note

    I don't think it's necessarily bad to have an if (TEST_MODE) in your main method as long as your production code remains independent of your tests.

    For example:

    public class Startup {
    
        private static final boolean TEST_MODE = false;
    
        public static void main(String[] args) {
            if (TEST_MODE) {
                TestSuite testSuite = new TestSuite();
                testSuite.execute();
            } else {
                Main main = new Main();
                main.execute();
            }
        }
    }
    

    However, it becomes a problem if your other classes know that they're running in test mode. If you have if (TEST_MODE) throughout all of your production code, you're opening yourself up to the problems I've mentioned above.

    Obviously in Java you would use something like JUnit or TestNG instead of this, but I just wanted to share my thoughts on the if (TEST_MODE) idea.

    How to test effectively

    This is a very large topic, so I'll keep this section of the answer short.

    • Instead of spying on internal state, use mocking and dependency injection.

      With mocks, you can assert that a method of a mock you've injected has been called. Better yet, the dependency injection will invert your classes' dependency on the implementation of whatever you've injected. This means you can swap out different implementations of things without needing to worry.

      This completely removes the need to hoke around inside your classes.
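
      A minimal sketch of that approach (assuming Mockito and JUnit 4, with hypothetical OrderService, OrderRepository and Order types): the repository is injected through the constructor, so the test hands in a mock and verifies the interaction rather than poking at the object's internals.

      import static org.mockito.Mockito.mock;
      import static org.mockito.Mockito.verify;

      import org.junit.Test;

      public class OrderServiceTest {

          @Test
          public void persistsTheOrderItReceives() {
              // The production class only sees the OrderRepository interface;
              // it has no idea whether it is talking to a mock or a real database.
              OrderRepository repository = mock(OrderRepository.class);
              OrderService service = new OrderService(repository);

              Order order = new Order("abc-123");
              service.place(order);

              // Assert on the interaction with the injected collaborator,
              // not on any private field of OrderService.
              verify(repository).save(order);
          }
      }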


    If there was one book I'd strongly recommend reading, it would be Modern C++ Programming with Test-Driven Development by Jeff Langr. It's probably the best TDD resource I've ever used.

    Despite having C++ in the title, its main focus is definitely TDD. The introduction of the book talks about how these examples should apply across all (similar) languages. Uncle Bob even states this in the foreword:

    Do you need to be a C++ programmer to understand it? Of course you don't. The C++ code is so clean and is written so well and the concepts are so clear that any Java, C#, C, or even Ruby programmer will have no trouble at all.

  • 2021-01-19 19:48

    TDD: why might it be wrong to let app code know it is being tested, not run?

    1) Carl Manaster has given an excellent, concise answer. If your implementation behaves differently depending on whether it is being tested, your test has no value: it does not reflect the real behaviour of the application in production, and therefore it does not validate the requirements.

    2) Test-Driven Development has nothing to do with letting app code know it is being tested. Whatever development methodology you use, you can introduce this type of error.

    In my TDD experience, TDD actually helps prevent the app code from knowing it is being tested: because you write the unit test first, and do it properly, you are guaranteed to end up with naturally testable application code that validates the app requirements and has no knowledge of the test code.

    I would rather expect this kind of error to happen when you write the test code after writing the application code: you may be tempted not to refactor the application code to make it testable, and instead to add some tricks to the implementation to avoid that refactoring.

    3) Test-Driven Development is about code that works, but you cannot forget the design of your app classes and your test classes when you use it.

    A trivial example of how this might help would be if you're creating a new class instance in the middle of a method and assigning it to a private field: mocking the private field won't help in that case, because the method overwrites it with the freshly created instance. But actually creating the real object might be very costly: you might want to replace it with a lightweight version when testing.

    I encountered such a situation yesterday, in fact... and my solution was to create a new package-private method called createXXX()... so I could mock it. But this in turn goes against the dictum "thou shalt not create methods just to suit your tests"!
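
    For reference, that seam looks roughly like this (the names are invented, and the partial mock assumes Mockito's spy/doReturn with the test living in the same package as the class):

    // HeavyResource.java - imagine the constructor is very costly
    public class HeavyResource {
        public String load() {
            return "data loaded the expensive way";
        }
    }

    // ReportGenerator.java (production code)
    public class ReportGenerator {

        public String generate() {
            // This used to be an inline "new HeavyResource()"; the package-private
            // factory method gives tests a seam to substitute a lightweight fake.
            HeavyResource resource = createHeavyResource();
            return resource.load();
        }

        // Package-private so a test in the same package can stub or override it.
        HeavyResource createHeavyResource() {
            return new HeavyResource();
        }
    }

    // ReportGeneratorTest.java (same package as ReportGenerator)
    import static org.mockito.Mockito.doReturn;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.spy;

    import org.junit.Test;

    public class ReportGeneratorTest {

        @Test
        public void generatesWithoutBuildingTheExpensiveResource() {
            ReportGenerator generator = spy(new ReportGenerator());
            doReturn(mock(HeavyResource.class)).when(generator).createHeavyResource();

            generator.generate();   // no expensive HeavyResource is ever constructed
        }
    }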

    Using the package-private modifier is acceptable in some cases, but it should be used only if no natural way of designing your code gives an acceptable solution.

    "thou shalt not create methods just to suit your tests" may be misleading.

    In fact I would rather say: "thou shalt not create methods to suit your tests and that open the API of the application in an undesirable way"

    In your example, when you want to mock or substitute a dependency during a test, if you practice TDD you should not modify the implementation directly but start the modification from the test code.
    And if your test code seems blocked because you are missing a constructor, a method, an object, etc. to set a dependency on the class under test, you are forced to add it to that class.
    That is the TDD way.

    Above, I referred to not opening the API more than needed. I will give two examples that provide a way of setting a dependency but that don't open the API in the same way.

    This approach is desirable because clients cannot change the behaviour of MyClass in production:

    @Service
    public class MyClass {
        // ...
        private MyDependency myDependency;
        // ...
        @Autowired
        public MyClass(MyDependency myDependency) {
            this.myDependency = myDependency;
        }
        // ...
    }
    

    This approach is less desirable because the MyClass API grows even though the application code doesn't need it. Besides, with this new method a client can change the behaviour of MyClass in production by calling the setter of the myDependency field:

    @Service
    public class MyClass {
        // ...
        private MyDependency myDependency;
        // ...
        @Autowired
        public void setMyDependency(MyDependency myDependency) {
            this.myDependency = myDependency;
        }
        // ...
    }
    

    Just a remark: if you have more than 4 or 5 arguments in your constructor, it may become cumbersome to use.
    If that happens, switching to setters is still probably not the best solution; the root of the problem is more likely that the class has too many responsibilities, and in that case it should be refactored.

  • 2021-01-19 19:49

    I read all these answers quite closely and they are all helpful. But perhaps I should reclassify myself: I appear to be becoming a low-intermediate TDD practitioner, rather than a newb. A lot of these points and rules of thumb I have already assimilated, either by reading or by sometimes baffling, occasionally bitter but always instructive experience over the past 6 months or so.

    Carl Manaster's analogy with the Volkswagen scandal is seductive but slightly inapplicable, perhaps: I am not suggesting that the app code should "detect" that a test is happening and alter its behaviour as a result.

    What I am suggesting is that there are one or two knotty, bothersome low-level problems where you might want to use this tool in a way that does not interfere in any way with the cast-iron rules and "philosophy" of TDD.

    Two examples:

    I have a few cases in my code where exceptions are thrown, and tests where I want to check they are thrown. Fine: I use doThrow( ... ) and @Test( expected = ... ) and everything works. But during a production run I want an error message to be printed out with a stack trace, while during a test run I just want the error message. I don't want logback-test.xml to suppress error-level logging completely, but apparently there is no way to configure a logger to prevent it printing out the stack trace.

    So what I can do is to have a method like this in the app code, contrived solely for testing:

    boolean suppressStacktrace() { return false; }
    

    ... and then I check that flag around the relevant LOGGER.error( ... ) call, and mock the method to return true when I want to provoke that exception during testing.
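
    Spelled out, the arrangement looks something like this (a sketch only; the class is hypothetical, the logging uses SLF4J/logback as in my setup, and the test-side stubbing assumes a Mockito spy in the same package):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class PaymentService {

        private static final Logger LOGGER = LoggerFactory.getLogger(PaymentService.class);

        // Contrived solely for testing: production always returns false,
        // so the behaviour of charge() itself never changes.
        boolean suppressStacktrace() {
            return false;
        }

        public void charge() {
            try {
                throw new IllegalStateException("card declined");   // stand-in for real work
            } catch (IllegalStateException e) {
                if (suppressStacktrace()) {
                    LOGGER.error("charge failed: {}", e.getMessage());   // test run: message only
                } else {
                    LOGGER.error("charge failed", e);                    // production: full stack trace
                }
                throw e;
            }
        }
    }

    // In the test, a Mockito spy flips the flag:
    //     PaymentService service = spy(new PaymentService());
    //     doReturn(true).when(service).suppressStacktrace();
    //     // then @Test(expected = IllegalStateException.class) as usual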

    Secondly, the rather specific case of console input: BufferedReader.readLine(). Substituting another InputStream for System.in and feeding it with a List of different Strings to be served up once per readLine is a right pain in the proverbial. What I have done is to have a private field in the app class:

    Deque<String> inputLinesDeque;
    

    ... and a package-private method to set this with a List<String> of input lines, which are then popped until the Deque is empty. During an app run this Deque is null, so an if branches to br.readLine() instead.
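
    Put together (again a sketch; the class and method names are invented), it looks roughly like this:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    public class ConsoleApp {

        private final BufferedReader br =
                new BufferedReader(new InputStreamReader(System.in));

        // Null during a normal run; a test fills it with canned input lines.
        private Deque<String> inputLinesDeque;

        // Package-private seam, intended only for tests in the same package.
        void setInputLines(List<String> lines) {
            this.inputLinesDeque = new ArrayDeque<>(lines);
        }

        String nextInputLine() throws IOException {
            if (inputLinesDeque != null) {
                return inputLinesDeque.poll();   // served up once per call, until empty
            }
            return br.readLine();                // real console input in production
        }
    }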

    These are just 2 examples. No doubt there are other situations where the ultra-purist approach comes at too high a price, and arguably procures no real benefit.

    However, I appreciate davidxxx's superior definition of one of the TDD 10 commandments: "thou shalt not create methods to suit your tests and that open the API of the application in an undesirable way". Very helpful: food for thought.

    later

    Since writing this a month ago I've realised it's far from impossible to extend and modify logback classes... I assume it wouldn't be too difficult to make your own logback class that would indeed accept a configuration flag in logback-test.xml to "suppress stack traces". And of course this bespoke logback class wouldn't have to be exported when you make an executable jar of your app ... but again, to me this comes in the category of "jumping through hoops". How "pure" does app code really need to be?

  • 2021-01-19 19:50

    Think of the big Volkswagen scandal. A system which behaves differently under test than under production load isn't really tested. That is: it is really two systems, the production system and the test system - and the only one of these which is tested is the test system. The production system, being different, is not tested. Every difference in behavior you introduce between the two systems is a testing vulnerability.

  • 2021-01-19 19:51

    a lot of tests have package-private access to the app classes

    I would advise against this; the idea of breaking encapsulation in production code feels like the tail wagging the dog to me. It suggests that the classes are too large and/or lack cohesion. TDD, dependency injection / inversion of control, mocking and writing single-responsibility classes should remove the need for relaxing visibility.

    The single comment says "Your code should definitely not branch on a global 'am I being tested' flag".

    Production code is production code and has no need to know about your tests. There should be no logic concerning tests in there; it's poor separation. Again, dependency injection / inversion of control lets you swap in test-specific logic at runtime, logic that won't be included in the production artifact.
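
    For instance, a minimal sketch (the types are invented): the production class depends only on an interface, production wiring supplies the real implementation, and the test supplies a fake that lives in the test source tree, so it never ends up in the shipped artifact.

    // Production code depends only on an abstraction.
    interface Clock {
        long now();
    }

    class SessionManager {

        private final Clock clock;

        SessionManager(Clock clock) {
            this.clock = clock;
        }

        boolean isExpired(long startedAtMillis, long ttlMillis) {
            return clock.now() - startedAtMillis > ttlMillis;
        }
    }

    // Production wiring (in main or a DI container configuration):
    //     new SessionManager(() -> System.currentTimeMillis());
    //
    // Test wiring, defined only in the test sources:
    //     SessionManager manager = new SessionManager(() -> 1_000L);
    //     assertTrue(manager.isExpired(0L, 500L));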
