Continuing in Python's unittest when an assertion fails

Asked by 感情败类 on 2020-11-27 14:03

EDIT: switched to a better example, and clarified why this is a real problem.

I'd like to write unit tests in Python that continue executing when an assertion fails.
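
For concreteness, this is the kind of test in question, pieced together from the Car example used in the answers below (so treat it as an illustration, not the original code): the first failing assert aborts the test and hides whatever the later asserts would have reported.

    import unittest

    class CarTest(unittest.TestCase):
        def test_init(self):
            car = Car(make="Ford", model="Model T")  # Car is the class under test
            self.assertEqual(car.make, "Ford")
            self.assertEqual(car.model, "Model T")  # if this fails...
            self.assertTrue(car.has_seats)          # ...these lines never run,
            self.assertEqual(car.wheel_count, 4)    # so their failures stay hidden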

12 Answers
  • 2020-11-27 14:35

    One option is to assert on all the values at once, as a tuple.

    For example:

    class CarTest(unittest.TestCase):
      def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(
                (car.make, car.model, car.has_seats, car.wheel_count),
                (make, model, True, 4))
    

    The output from this test would be:

    ======================================================================
    FAIL: test_init (test.CarTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "C:\temp\py_mult_assert\test.py", line 17, in test_init
        (make, model, True, 4))
    AssertionError: Tuples differ: ('Ford', 'Ford', True, 3) != ('Ford', 'Model T', True, 4)
    
    First differing element 1:
    Ford
    Model T
    
    - ('Ford', 'Ford', True, 3)
    ?           ^ -          ^
    
    + ('Ford', 'Model T', True, 4)
    ?           ^  ++++         ^
    

    This shows that both the model and the wheel count are incorrect.

  • 2020-11-27 14:38

    Do each assert in a separate method.

    class MathTest(unittest.TestCase):
      def test_addition1(self):
        self.assertEqual(1 + 0, 1)

      def test_addition2(self):
        self.assertEqual(1 + 1, 3)  # Failure: 2 != 3

      def test_addition3(self):
        self.assertEqual(1 + (-1), 0)

      def test_addition4(self):
        self.assertEqual(-1 + (-1), -1)  # Failure: -2 != -1
    
  • 2020-11-27 14:38

    I liked the approach by @Anthony-Batchelor of capturing the AssertionError exception. Here is a slight variation on that approach that uses decorators, plus a way to report the test cases as pass/fail.

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    
    import unittest
    
    class UTReporter(object):
        '''
        Keeps track of the test cases that have been executed.
        '''
        def __init__(self):
            self.testcases = []
            print("init called")
    
        def add_testcase(self, testcase):
            self.testcases.append(testcase)
    
        def display_report(self):
            for tc in self.testcases:
                msg = "=============================" + "\n" + \
                    "Name: " + tc['name'] + "\n" + \
                    "Description: " + str(tc['description']) + "\n" + \
                    "Status: " + tc['status'] + "\n"
                print(msg)
    
    reporter = UTReporter()
    
    def assert_capture(*args, **kwargs):
        '''
        Defines the override behavior: unit test functions decorated
        with this decorator swallow the unittest AssertionError and
        log the test case to the UTReporter instead.
        '''
        def assert_decorator(func):
            def inner(*args, **kwargs):
                tc = {}
                tc['name'] = func.__name__
                tc['description'] = func.__doc__
                try:
                    func(*args, **kwargs)
                    tc['status'] = 'pass'
                except AssertionError:
                    tc['status'] = 'fail'
                reporter.add_testcase(tc)
            return inner
        return assert_decorator
    
    
    
    class DecorateUt(unittest.TestCase):
    
        @assert_capture()
        def test_basic(self):
            x = 5
            self.assertEqual(x, 4)
    
        @assert_capture()
        def test_basic_2(self):
            x = 4
            self.assertEqual(x, 4)
    
    def main():
        #unittest.main()
        suite = unittest.TestLoader().loadTestsFromTestCase(DecorateUt)
        unittest.TextTestRunner(verbosity=2).run(suite)
    
        reporter.display_report()
    
    
    if __name__ == '__main__':
        main()
    

    Output from the console. Note that unittest itself reports both tests as "ok", because the decorator swallows the AssertionError; the actual pass/fail status appears in the UTReporter summary:

    (awsenv)$ ./decorators.py 
    init called
    test_basic (__main__.DecorateUt) ... ok
    test_basic_2 (__main__.DecorateUt) ... ok
    
    ----------------------------------------------------------------------
    Ran 2 tests in 0.000s
    
    OK
    =============================
    Name: test_basic
    Description: None
    Status: fail
    
    =============================
    Name: test_basic_2
    Description: None
    Status: pass
    
  • 2020-11-27 14:40

    It is considered an anti-pattern to have multiple asserts in a single unit test. A single unit test is expected to test only one thing. Perhaps you are testing too much. Consider splitting this test up into multiple tests. This way you can name each test properly.

    Sometimes, however, it is okay to check multiple things at the same time, for instance when you are asserting properties of the same object. In that case you are really asserting whether that object as a whole is correct. A way to do this is to write a custom helper method that knows how to assert on that object. You can write that method so that it shows all failing properties, or, for instance, the complete state of the expected object and the complete state of the actual object, when an assert fails.
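
    As an illustration, here is a minimal sketch of such a helper for the Car example from the question (the name assert_car and its parameters are invented for this sketch):

    import unittest

    class CarTest(unittest.TestCase):
        def assert_car(self, car, make, model, has_seats, wheel_count):
            """Check every property, then fail once, listing all mismatches."""
            problems = []
            if car.make != make:
                problems.append("make: expected %r, got %r" % (make, car.make))
            if car.model != model:
                problems.append("model: expected %r, got %r" % (model, car.model))
            if car.has_seats != has_seats:
                problems.append("has_seats: expected %r, got %r" % (has_seats, car.has_seats))
            if car.wheel_count != wheel_count:
                problems.append("wheel_count: expected %r, got %r" % (wheel_count, car.wheel_count))
            if problems:
                self.fail("Car does not match expectations:\n  " + "\n  ".join(problems))

        def test_init(self):
            car = Car(make="Ford", model="Model T")  # Car comes from the question
            self.assert_car(car, make="Ford", model="Model T",
                            has_seats=True, wheel_count=4)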

  • 2020-11-27 14:42

    There is a soft-assertion package on PyPI called softest that will handle your requirements. It works by collecting the failures, combining exception and stack-trace data, and reporting everything as part of the usual unittest output.

    For instance, this code:

    import softest
    
    class ExampleTest(softest.TestCase):
        def test_example(self):
            # be sure to pass the assert method object, not a call to it
            self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
            # self.soft_assert(self.assertEqual('Worf', 'wharf', 'Klingon is not ship receptacle')) # will not work as desired
            self.soft_assert(self.assertTrue, True)
            self.soft_assert(self.assertTrue, False)
    
            self.assert_all()
    
    if __name__ == '__main__':
        softest.main()
    

    ...produces this console output:

    ======================================================================
    FAIL: "test_example" (ExampleTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "C:\...\softest_test.py", line 14, in test_example
        self.assert_all()
      File "C:\...\softest\case.py", line 138, in assert_all
        self.fail(''.join(failure_output))
    AssertionError: ++++ soft assert failure details follow below ++++
    
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    The following 2 failures were found in "test_example" (ExampleTest):
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    Failure 1 ("test_example" method)
    +--------------------------------------------------------------------+
    Traceback (most recent call last):
      File "C:\...\softest_test.py", line 10, in test_example
        self.soft_assert(self.assertEqual, 'Worf', 'wharf', 'Klingon is not ship receptacle')
      File "C:\...\softest\case.py", line 84, in soft_assert
        assert_method(*arguments, **keywords)
      File "C:\...\Python\Python36-32\lib\unittest\case.py", line 829, in assertEqual
        assertion_func(first, second, msg=msg)
      File "C:\...\Python\Python36-32\lib\unittest\case.py", line 1203, in assertMultiLineEqual
        self.fail(self._formatMessage(msg, standardMsg))
      File "C:\...\Python\Python36-32\lib\unittest\case.py", line 670, in fail
        raise self.failureException(msg)
    AssertionError: 'Worf' != 'wharf'
    - Worf
    + wharf
     : Klingon is not ship receptacle
    
    +--------------------------------------------------------------------+
    Failure 2 ("test_example" method)
    +--------------------------------------------------------------------+
    Traceback (most recent call last):
      File "C:\...\softest_test.py", line 12, in test_example
        self.soft_assert(self.assertTrue, False)
      File "C:\...\softest\case.py", line 84, in soft_assert
        assert_method(*arguments, **keywords)
      File "C:\...\Python\Python36-32\lib\unittest\case.py", line 682, in assertTrue
        raise self.failureException(msg)
    AssertionError: False is not true
    
    
    ----------------------------------------------------------------------
    Ran 1 test in 0.000s
    
    FAILED (failures=1)
    

    NOTE: I created and maintain softest.

  • 2020-11-27 14:42

    I don't think there is a way to do this with PyUnit, and I wouldn't want to see PyUnit extended in this way.

    I prefer to stick to one assertion per test function (or, more specifically, to asserting one concept per test) and would rewrite test_addition() as four separate test functions.
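
    Something along these lines (a sketch; the method names are assumptions chosen to match the output below):

    import unittest

    class MathTest(unittest.TestCase):
        def test_addition_with_zero(self):
            self.assertEqual(1 + 0, 1)

        def test_addition_with_negative_and_positive(self):
            self.assertEqual(1 + (-1), 0)

        def test_addition_with_two_positives(self):
            self.assertEqual(1 + 1, 3)  # Failure!

        def test_addition_with_two_negatives(self):
            self.assertEqual(-1 + (-1), -1)  # Failure!

    This gives more useful information on failure, viz: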

    .FF.
    ======================================================================
    FAIL: test_addition_with_two_negatives (__main__.MathTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_addition.py", line 10, in test_addition_with_two_negatives
        self.assertEqual(-1 + (-1), -1)
    AssertionError: -2 != -1
    
    ======================================================================
    FAIL: test_addition_with_two_positives (__main__.MathTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_addition.py", line 6, in test_addition_with_two_positives
        self.assertEqual(1 + 1, 3)  # Failure!
    AssertionError: 2 != 3
    
    ----------------------------------------------------------------------
    Ran 4 tests in 0.000s
    
    FAILED (failures=2)
    

    If you decide that this approach isn't for you, you may find this answer helpful.

    Update

    It looks like you are testing two concepts with your updated question, and I would split these into two unit tests. The first is that the parameters are stored on the creation of a new object. This would have two assertions, one for make and one for model. If the first fails, then that clearly needs to be fixed; whether the second passes or fails is irrelevant at this juncture.

    The second concept is more questionable... You're testing whether some default values are initialised. Why? It would be more useful to test these values at the point that they are actually used (and if they are not used, then why are they there?).
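
    For reference, the two tests behind the output below might look something like this (a sketch assuming the Car class from the question; the setUp fixture and the placement of the failing lines are inferred from the tracebacks):

    import unittest

    class CarTest(unittest.TestCase):
        def setUp(self):
            self.make = "Ford"
            self.model = "Model T"
            self.car = Car(make=self.make, model=self.model)

        def test_creation_defaults(self):
            self.assertTrue(self.car.has_seats)
            self.assertEqual(self.car.wheel_count, 4)  # Failure!

        def test_creation_parameters(self):
            self.assertEqual(self.car.make, self.make)
            self.assertEqual(self.car.model, self.model)  # Failure!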

    Both of these tests fail, and both should. When I am unit-testing, I am far more interested in failure than I am in success as that is where I need to concentrate.

    FF
    ======================================================================
    FAIL: test_creation_defaults (__main__.CarTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_car.py", line 25, in test_creation_defaults
        self.assertEqual(self.car.wheel_count, 4)  # Failure!
    AssertionError: 3 != 4
    
    ======================================================================
    FAIL: test_creation_parameters (__main__.CarTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "test_car.py", line 20, in test_creation_parameters
        self.assertEqual(self.car.model, self.model)  # Failure!
    AssertionError: 'Ford' != 'Model T'
    
    ----------------------------------------------------------------------
    Ran 2 tests in 0.000s
    
    FAILED (failures=2)
    