How to skip the rest of the tests in a class if one has failed?

梦毁少年i 2020-11-28 06:45

I'm creating test cases for web tests using Jenkins, Python, Selenium 2 (WebDriver) and the py.test framework.

So far I'm organizing my tests in the following structure: …

9 Answers
  • 2020-11-28 07:05

    Based on hpk42's answer, here's my slightly modified incremental mark that makes test cases xfail if the previous test failed (but not if it xfailed or it was skipped). This code has to be added to conftest.py:

    import pytest

    # Capture the exception classes raised by pytest.skip() and pytest.xfail()
    # without importing pytest internals directly.
    try:
        pytest.skip()
    except BaseException as e:
        Skipped = type(e)

    try:
        pytest.xfail()
    except BaseException as e:
        XFailed = type(e)

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                # Ignore skips and xfails: only a real failure should block
                # the remaining tests in the class.
                if call.excinfo.type in {Skipped, XFailed}:
                    return

                parent = item.parent
                parent._previousfailed = item

    def pytest_runtest_setup(item):
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
    

    And then a collection of test cases has to be marked with @pytest.mark.incremental:

    import pytest
    
    @pytest.mark.incremental
    class TestWhatever:
        def test_a(self):  # this will pass
            pass
    
        def test_b(self):  # this will be skipped
            pytest.skip()
    
        def test_c(self):  # this will fail
            assert False
    
        def test_d(self):  # this will xfail because test_c failed
            pass
    
        def test_e(self):  # this will xfail because test_c failed
            pass
    
  • 2020-11-28 07:06

    You might want to have a look at pytest-dependency. It is a plugin that allows you to skip some tests if some other test has failed. In your case, though, the incremental testing approach that gbonetti discussed seems more relevant.
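
    For reference, a minimal sketch of how pytest-dependency is typically used; the marker arguments and the dependency name below follow my reading of the plugin's documentation and are only illustrative:

    import pytest

    class TestUserHandling:
        @pytest.mark.dependency()
        def test_login(self):
            assert True

        # Skipped automatically if test_login failed or was skipped.
        @pytest.mark.dependency(depends=["TestUserHandling::test_login"])
        def test_modification(self):
            assert True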

  • 2020-11-28 07:15

    To complement hpk42's answer, you can also use pytest-steps to perform incremental testing. It can help in particular if you wish to share some kind of incremental state/intermediate results between the steps.

    With this package you do not need to put all the steps in a class (you can, but it is not required); simply decorate your "test suite" function with @test_steps:

    from pytest_steps import test_steps
    
    def step_a():
        # perform this step ...
        print("step a")
        assert not False  # replace with your logic
    
    def step_b():
        # perform this step
        print("step b")
        assert not False  # replace with your logic
    
    @test_steps(step_a, step_b)
    def test_suite_no_shared_results(test_step):
        # Execute the step
        test_step()
    

    You can add a steps_data parameter to your test function if you wish to share a StepsDataHolder object between your steps.

    import pytest
    from pytest_steps import test_steps, StepsDataHolder
    
    def step_a(steps_data):
        # perform this step ...
        print("step a")
        assert not False  # replace with your logic
    
        # intermediate results can be stored in steps_data
        steps_data.intermediate_a = 'some intermediate result created in step a'
    
    def step_b(steps_data):
        # perform this step, leveraging the previous step's results
        print("step b")
    
        # you can leverage the results from previous steps... 
        # ... or pytest.skip if not relevant
        if len(steps_data.intermediate_a) < 5:
            pytest.skip("Step b should only be executed if the text is long enough")
    
        new_text = steps_data.intermediate_a + " ... augmented"  
        print(new_text)
        assert len(new_text) == 56
    
    @test_steps(step_a, step_b)
    def test_suite_with_shared_results(test_step, steps_data: StepsDataHolder):
    
        # Execute the step with access to the steps_data holder
        test_step(steps_data)
    

    Finally, you can automatically skip or fail a step if another one has failed using @depends_on; check the documentation for details. A rough sketch is shown below.
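
    A minimal sketch, assuming the depends_on decorator is importable from pytest_steps and marks a step as skipped when its dependency did not succeed (see the package documentation for the exact behaviour):

    from pytest_steps import test_steps, depends_on

    def step_a():
        print("step a")

    @depends_on(step_a)  # skipped automatically if step_a did not succeed
    def step_b():
        print("step b")

    @test_steps(step_a, step_b)
    def test_suite_with_dependency(test_step):
        test_step()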

    (I'm the author of this package by the way ;) )

  • 2020-11-28 07:19

    Or, quite simply, instead of calling py.test from the command line (or from tox, or wherever), just call:

    py.test --maxfail=1
    

    See here for more switches: https://pytest.org/latest/usage.html
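
    Note that --maxfail=1 aborts the whole run after the first failure, not just the remaining tests of the failing class. The -x flag is shorthand for the same behaviour:

    py.test -x    # exit after the first failed test (equivalent to --maxfail=1)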

  • 2020-11-28 07:20

    UPDATE: Please take a look at @hpk42's answer; it is less intrusive.

    This is what I was actually looking for:

    from _pytest.runner import runtestprotocol
    import pytest
    from _pytest.mark import MarkInfo  # note: relies on pytest internals

    def check_call_report(item, nextitem):
        """
        If a test method fails, mark the rest of the test methods in the
        class as 'skip'; additionally, if the failing method is marked as
        'pytest.mark.blocker', interrupt further testing altogether.
        """
        reports = runtestprotocol(item, nextitem=nextitem)
        for report in reports:
            if report.when == "call":
                if report.outcome == "failed":
                    # mark every remaining test collected on the same parent as skipped
                    remaining = item.parent._collected[item.parent._collected.index(item):]
                    for test_method in remaining:
                        test_method._request.applymarker(pytest.mark.skipif("True"))
                        if ('blocker' in test_method.keywords and
                                isinstance(test_method.keywords.get('blocker'), MarkInfo)):
                            item.session.shouldstop = (
                                "blocker issue has failed or was marked for skipping"
                            )
                break

    def pytest_runtest_protocol(item, nextitem):
        # run the test protocol ourselves instead of letting pytest do it
        item.ihook.pytest_runtest_logstart(
            nodeid=item.nodeid, location=item.location,
        )
        check_call_report(item, nextitem)
        return True
    

    Adding this to conftest.py, or as a plugin, solves my problem.
    It is also extended to STOP testing altogether if a blocker test has failed (meaning any further tests would be useless); a sketch of how such a blocker test could be marked follows.
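
    For illustration only, a hypothetical test class using such a custom 'blocker' marker could look like this (the marker is not built into pytest; it is whatever name your own suite registers and checks for, as in the snippet above):

    import pytest

    class TestCriticalPath:
        @pytest.mark.blocker  # hypothetical marker: if this fails, the session stops
        def test_login(self):
            assert True

        def test_modification(self):
            # skipped automatically once an earlier test in the class fails
            assert True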

  • 2020-11-28 07:21

    I like the general "test-step" idea. I'd term it "incremental" testing, and it makes the most sense in functional testing scenarios IMHO.

    Here is an implementation that doesn't depend on internal details of pytest (except for the official hook extensions). Copy this into your conftest.py:

    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                # remember the failing test on its parent (the class)
                parent = item.parent
                parent._previousfailed = item

    def pytest_runtest_setup(item):
        previousfailed = getattr(item.parent, "_previousfailed", None)
        if previousfailed is not None:
            # a sibling test already failed: xfail this one
            pytest.xfail("previous test failed (%s)" % previousfailed.name)
    

    If you now have a "test_step.py" like this:

    import pytest
    
    @pytest.mark.incremental
    class TestUserHandling:
        def test_login(self):
            pass
        def test_modification(self):
            assert 0
        def test_deletion(self):
            pass
    

    then running it looks like this (using -rx to report on xfail reasons):

    (1)hpk@t2:~/p/pytest/doc/en/example/teststep$ py.test -rx
    ============================= test session starts ==============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev17
    plugins: xdist, bugzilla, cache, oejskit, cli, pep8, cov, timeout
    collected 3 items
    
    test_step.py .Fx
    
    =================================== FAILURES ===================================
    ______________________ TestUserHandling.test_modification ______________________
    
    self = <test_step.TestUserHandling instance at 0x1e0d9e0>
    
        def test_modification(self):
    >       assert 0
    E       assert 0
    
    test_step.py:8: AssertionError
    =========================== short test summary info ============================
    XFAIL test_step.py::TestUserHandling::()::test_deletion
      reason: previous test failed (test_modification)
    ================ 1 failed, 1 passed, 1 xfailed in 0.02 seconds =================
    

    I am using "xfail" here because skips are rather meant for wrong environments, missing dependencies, or wrong interpreter versions.

    Edit: Note that neither your example nor mine would directly work with distributed testing. For this, the pytest-xdist plugin needs to grow a way to define groups/classes to be sent wholesale to one testing slave, instead of the current mode which usually sends the test functions of a class to different slaves.
