xdist

pytest: How to get a list of all failed tests at the end of the session? (and while using xdist)

Submitted by 人盡茶涼 on 2021-02-18 08:59:27
Question: I would like to have a list of all the tests that have failed, to be used at the end of the session. Pytest lets you define a hook pytest_sessionfinish(session, exitstatus) that is called at the end of the session, which is where I wish to have that list. session is a _pytest.main.Session instance that has the attribute items (a list), but I couldn't find whether each item in that list passed or failed. How can a list of all failed tests be retrieved at the end of the session?
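A minimal sketch of one way to do this, assuming a conftest.py at the project root: the pytest_runtest_logreport hook receives a report for every test phase, and under xdist these reports are forwarded to the controller process, so a module-level list built there is complete by the time pytest_sessionfinish runs. The name FAILED_TESTS is an arbitrary choice.

```python
# conftest.py (sketch): collect the nodeids of failed tests as the run goes.
FAILED_TESTS = []  # hypothetical module-level accumulator


def pytest_runtest_logreport(report):
    # report.when is "setup", "call", or "teardown"; record a test once
    # if any of its phases failed.
    if report.failed and report.nodeid not in FAILED_TESTS:
        FAILED_TESTS.append(report.nodeid)


def pytest_sessionfinish(session, exitstatus):
    if FAILED_TESTS:
        print("Failed tests:")
        for nodeid in FAILED_TESTS:
            print("  " + nodeid)
```

The deduplication matters because a test that fails in its call phase and again in teardown produces two failed reports for the same nodeid.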

Is there a way to control how pytest-xdist runs tests in parallel?

Submitted by 廉价感情. on 2020-03-17 07:43:11
Question: I have the following directory layout:

    runner.py
    lib/
    tests/
        testsuite1/
            testsuite1.py
        testsuite2/
            testsuite2.py
        testsuite3/
            testsuite3.py
        testsuite4/
            testsuite4.py

The format of the testsuite*.py modules is as follows:

    import pytest

    class testsomething:
        def setup_class(self):
            '''do some setup'''
            # Do some setup stuff here

        def teardown_class(self):
            '''do some teardown'''
            # Do some teardown stuff here

        def test1(self):
            # Do some test1 related stuff

        def test2(self):
            # Do some test2 related stuff
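One hedged sketch of how a layout like this can be steered: recent pytest-xdist versions offer --dist loadscope (group tests by class or module onto one worker) and --dist loadgroup together with an xdist_group mark, which pins marked tests to a single worker. The group name "suite1" below is an arbitrary example.

```python
# Run with: pytest -n 4 --dist loadgroup
# All tests in a class marked with the same xdist_group run on one worker,
# so per-class setup is not repeated across workers.
import pytest


@pytest.mark.xdist_group("suite1")
class TestSuite1:
    def setup_class(cls):
        # Stand-in for real per-suite setup.
        cls.resource = "ready"

    def test1(self):
        assert self.resource == "ready"
```

With plain --dist load (the default), tests are handed out individually in whatever order workers become free, so grouping is the usual way to keep a suite together.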

py.test with xdist is not executing tests parametrized with random values

Submitted by 我与影子孤独终老i on 2019-12-12 14:45:17
Question: Has anybody noticed the following strange behaviour with pytest and xdist? When trying to run a test that is parametrized with randomly selected values, the tests are not actually run. The same test executes without any problems if xdist is not used. The following code can be used to reproduce this:

    import pytest
    import random

    PARAMS_NUMBER = 3
    PARAMS = []
    for i in range(PARAMS_NUMBER):
        PARAMS.append(random.randrange(0, 1000))

    @pytest.mark.parametrize('rand_par', PARAMS)
    def test_random
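A likely explanation, with a hedged fix: each xdist worker imports the test module independently, so the unseeded random.randrange() calls produce a different PARAMS list in each worker, the collected test ids disagree across workers, and xdist refuses to run them. Seeding the generator (42 below is an arbitrary choice) makes collection deterministic everywhere:

```python
import random

import pytest

random.seed(42)  # fixed seed so every xdist worker collects identical test ids

PARAMS_NUMBER = 3
PARAMS = [random.randrange(0, 1000) for _ in range(PARAMS_NUMBER)]


@pytest.mark.parametrize('rand_par', PARAMS)
def test_random(rand_par):
    assert 0 <= rand_par < 1000
```

If genuinely fresh values are wanted per run, deriving the seed from an environment variable shared by all workers keeps collection consistent while still varying between runs.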

pytest + xdist without capturing output?

Submitted by 蹲街弑〆低调 on 2019-12-10 12:33:45
Question: I'm using pytest with pytest-xdist for parallel test running. It doesn't seem to honour the -s option for passing standard output through to the terminal while the tests run. Is there any way to make this happen? I realise this could cause the output from the different processes to be jumbled in the terminal, but I'm ok with that.

Answer 1: I found a workaround, although not a full solution. By redirecting stdout to stderr, the output of print statements is displayed.

How to print output when using pytest with xdist

Submitted by 冷暖自知 on 2019-12-10 01:59:18
Question: I'm using py.test to run tests, with pytest-xdist to run them in parallel. I want to see the output of print statements in my tests. I have: Ubuntu 15.10, Python 2.7.10, pytest-2.9.1, pluggy-0.3.1. Here's my test file:

    def test_a():
        print 'test_a'

    def test_b():
        print 'test_b'

When I run py.test, nothing is printed. That's expected: by default, py.test captures output. When I run py.test -s, it prints test_a and test_b, as it should. When I run py.test -s -n2, again nothing is printed.
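One hedged workaround for this setup, sidestepping rather than fixing the capture: write diagnostics to stderr explicitly, which xdist workers pass through even when stdout is swallowed. The question's environment is Python 2, but the same idea in Python 3 syntax looks like:

```python
import sys


def test_a():
    # stderr bypasses the per-worker stdout capture under pytest-xdist
    print('test_a', file=sys.stderr)


def test_b():
    print('test_b', file=sys.stderr)
```

In Python 2 the equivalent is "print >> sys.stderr, 'test_a'" or sys.stderr.write().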

Dynamically control order of tests with pytest

Submitted by …衆ロ難τιáo~ on 2019-12-07 11:27:32
Question: I would like to control the order of my tests using logic that will reorder them on the fly, while they are already running. My use case is this: I am parallelizing my tests with xdist, and each test uses external resources from a common and limited pool. Some tests use more resources than others, so at any given time, when only a fraction of the resources are available, some of the tests have the resources they need to run and others don't. I want to optimize the usage of the resources, so I would like to dynamically choose which test will run next, based on the resources currently available.

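pytest has no supported hook for reordering tests once the session has started, so a common alternative is to gate each test on resource availability instead of reordering. The sketch below uses hypothetical names (ResourcePool, POOL) and an in-process semaphore for simplicity; a real xdist setup would need a cross-process mechanism such as a file lock, since workers are separate processes.

```python
import contextlib
import threading


class ResourcePool:
    """Sketch of a counted pool of interchangeable resource units."""

    def __init__(self, size):
        self._size = size
        self._sem = threading.BoundedSemaphore(size)

    @contextlib.contextmanager
    def acquire(self, n=1):
        # Blocks until n units are free, then holds them for the body.
        # Simplified: a production pool would acquire all-or-nothing to
        # avoid deadlock between two large concurrent requests.
        for _ in range(n):
            self._sem.acquire()
        try:
            yield
        finally:
            for _ in range(n):
                self._sem.release()


POOL = ResourcePool(4)  # hypothetical pool of 4 resource units


def test_light():
    with POOL.acquire(1):
        assert True  # stand-in for work using 1 unit


def test_heavy():
    with POOL.acquire(3):
        assert True  # stand-in for work using 3 units
```

With this approach the scheduler stays dumb: each worker simply blocks until its test's resources free up, which approximates the dynamic ordering the question asks for without fighting pytest's collection model.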