Question
I have a test with a setup method that should receive a dataset, and a test function that should run once for each item in that dataset.
Basically I would need something like:
datasetA = [data1_a, data2_a, data3_a]
datasetB = [data1_b, data2_b, data3_b]

@pytest.fixture(autouse=True, scope="module", params=[datasetA, datasetB])
def setup(dataset):
    # do setup
    yield
    # finalize

# dataset should be the same instantiated for the setup
@pytest.mark.parametrize('data', [data for data in dataset])
def test_data(data):
    # do test
It should run like:
- setup(datasetA)
- test(data1_a)
- test(data2_a)
- test(data3_a)
- setup(datasetB)
- test(data1_b)
- test(data2_b)
- test(data3_b)
However, it does not seem to be possible to parametrize over a variable obtained from a fixture, as I wanted to do in the example.
I could have my function use a fixture and iterate inside the test method:
def test_data(dataset):
    for data in dataset:
        # do test
But then I would have one large test instead of a separate test for each case, which I would not like to have.
Is there any way of accomplishing this?
Thanks!
Answer 1:
If strictly following your test design, it should look like this:
import pytest

datasetA = [10, 20, 30]
datasetB = [100, 200, 300]

@pytest.fixture
def dataset(request):
    # do setup
    items = request.param
    yield items
    # finalize

@pytest.fixture
def item(request, dataset):
    index = request.param
    yield dataset[index]

# dataset should be the same instantiated for the setup
@pytest.mark.parametrize('dataset', [datasetA, datasetB], indirect=True)
@pytest.mark.parametrize('item', [0, 1, 2], indirect=True)
def test_data(dataset, item):
    print(item)
    # do test
Note the indirect parametrization for both item and dataset. The parameter values are passed to the same-named fixtures as request.param. In this case we parametrize over indices, assuming that both datasets have the same length of 3 items.
Here is how it executes:
$ pytest -s -v -ra test_me.py
test_me.py::test_data[0-dataset0] 10
PASSED
test_me.py::test_data[0-dataset1] 100
PASSED
test_me.py::test_data[1-dataset0] 20
PASSED
test_me.py::test_data[1-dataset1] 200
PASSED
test_me.py::test_data[2-dataset0] 30
PASSED
test_me.py::test_data[2-dataset1] 300
PASSED
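Note that the dataset fixture above is function-scoped, so its setup and finalize code runs for every single test, and the ids show the two datasets interleaved, whereas the question asked for one setup per dataset. A possible variation (a sketch added here, not part of the original answer) is to give both the fixture and its indirect parametrization module scope, so that pytest groups the tests by dataset instance:

import pytest

datasetA = [10, 20, 30]
datasetB = [100, 200, 300]

@pytest.fixture(scope="module")
def dataset(request):
    # do setup -- intended to run once per dataset thanks to the module scope
    items = request.param
    yield items
    # finalize -- runs after the last test that used this dataset

@pytest.fixture
def item(request, dataset):
    yield dataset[request.param]

@pytest.mark.parametrize('dataset', [datasetA, datasetB], indirect=True, scope="module")
@pytest.mark.parametrize('item', [0, 1, 2], indirect=True)
def test_data(dataset, item):
    print(item)

Whether the tests end up grouped exactly as in the question depends on how pytest reorders tests for higher-scoped parametrized fixtures, so it is worth verifying the resulting order with pytest -v.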
Answer 2:
You can also inject into the collection & parametrization stage of pytest via the pseudo-plugin named conftest.py in the current directory:
conftest.py:
import pytest

datasetA = [100, 200, 300]
datasetB = [10, 20, 30]

def pytest_generate_tests(metafunc):
    if 'data' in metafunc.fixturenames:
        for datasetname, dataset in zip(['A', 'B'], [datasetA, datasetB]):
            for data in dataset:
                metafunc.addcall(dict(data=data), id=datasetname + str(data))
test_me.py:
def test_data(data):
    print(data)
    # do test
Run:
$ pytest -ra -v -s test_me.py
test_me.py::test_data[A100] 100
PASSED
test_me.py::test_data[A200] 200
PASSED
test_me.py::test_data[A300] 300
PASSED
test_me.py::test_data[B10] 10
PASSED
test_me.py::test_data[B20] 20
PASSED
test_me.py::test_data[B30] 30
PASSED
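Note that metafunc.addcall() was deprecated and has been removed in modern pytest versions (5.0 and later), so on a recent pytest the same direct parametrization can be written with metafunc.parametrize(). A rough sketch of an equivalent conftest.py, keeping the same test ids:

datasetA = [100, 200, 300]
datasetB = [10, 20, 30]

def pytest_generate_tests(metafunc):
    if 'data' in metafunc.fixturenames:
        # collect (value, id) pairs for both datasets, then parametrize once
        values, ids = [], []
        for datasetname, dataset in zip(['A', 'B'], [datasetA, datasetB]):
            for data in dataset:
                values.append(data)
                ids.append(datasetname + str(data))
        metafunc.parametrize('data', values, ids=ids)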
However, making dataset indirect (i.e. accessible via a fixture with setup & teardown stages) becomes difficult here, since metafunc.addcall() does not support indirect parameters. The only way to pass indirect=... is via metafunc.parametrize(). But in that case, assuming that the datasets are of different sizes, you have to build the whole list of dataset-dataitem pairs:
conftest.py:
import pytest

datasetA = [100, 200, 300]
datasetB = [10, 20, 30]
datasets = [datasetA, datasetB]

def pytest_generate_tests(metafunc):
    if 'data' in metafunc.fixturenames:
        metafunc.parametrize('dataset, data', [
            (dataset, data)
            for dataset in datasets
            for data in dataset
        ], indirect=['dataset'], ids=[
            'DS{}-{}'.format(idx, str(data))
            for idx, dataset in enumerate(datasets)
            for data in dataset
        ])

@pytest.fixture()
def dataset(request):
    # do setup
    yield request.param
    # finalize
test_me.py:
def test_data(dataset, data):
    print(data)
    # do test
Run:
$ pytest -ra -v -s test_me.py
test_me.py::test_data[DS0-100] 100
PASSED
test_me.py::test_data[DS0-200] 200
PASSED
test_me.py::test_data[DS0-300] 300
PASSED
test_me.py::test_data[DS1-10] 10
PASSED
test_me.py::test_data[DS1-20] 20
PASSED
test_me.py::test_data[DS1-30] 30
PASSED
Answer 3:
pytest-cases offers two ways to solve this problem:
- @cases_data, a decorator that you can use on your test function or fixture so that it sources its parameters from various "case functions", possibly in various modules, and possibly themselves parametrized. The problem is that "case functions" are not fixtures, and therefore do not let you benefit from the dependency and setup/teardown mechanisms. I rather use it to collect various cases from the file system.
- fixture_union, more recent but more 'pytest-y', allows you to create a fixture that is the union of two or more fixtures. This includes setup/teardown and dependencies, so it is what you would prefer here. You can create a union either explicitly or by using pytest_parametrize_plus with fixture_ref() in the parameter values.
Here is how your example would look:
import pytest
from pytest_cases import pytest_parametrize_plus, pytest_fixture_plus, fixture_ref

# ------ Dataset A
DA = ['data1_a', 'data2_a', 'data3_a']
DA_data_indices = list(range(len(DA)))

@pytest_fixture_plus(scope="module")
def datasetA():
    print("setting up dataset A")
    yield DA
    print("tearing down dataset A")

@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', DA_data_indices, ids="idx={}".format)
def data_from_datasetA(datasetA, data_index):
    return datasetA[data_index]

# ------ Dataset B
DB = ['data1_b', 'data2_b']
DB_data_indices = list(range(len(DB)))

@pytest_fixture_plus(scope="module")
def datasetB():
    print("setting up dataset B")
    yield DB
    print("tearing down dataset B")

@pytest_fixture_plus(scope="module")
@pytest.mark.parametrize('data_index', range(len(DB)), ids="idx={}".format)
def data_from_datasetB(datasetB, data_index):
    return datasetB[data_index]

# ------ Test
@pytest_parametrize_plus('data', [fixture_ref('data_from_datasetA'),
                                  fixture_ref('data_from_datasetB')])
def test_databases(data):
    # do test
    print(data)
Of course, you may wish to handle any number of datasets dynamically. In that case you have to generate all the alternative fixtures dynamically, because pytest has to know in advance how many tests to execute. This works quite well:
import pytest
from makefun import with_signature
from pytest_cases import pytest_parametrize_plus, pytest_fixture_plus, fixture_ref

# ------ Datasets
datasets = {
    'DA': ['data1_a', 'data2_a', 'data3_a'],
    'DB': ['data1_b', 'data2_b']
}
datasets_indices = {dn: range(len(dc)) for dn, dc in datasets.items()}

# ------ Datasets fixture generation
def create_dataset_fixture(dataset_name):
    @pytest_fixture_plus(scope="module", name=dataset_name)
    def dataset():
        print("setting up dataset %s" % dataset_name)
        yield datasets[dataset_name]
        print("tearing down dataset %s" % dataset_name)
    return dataset

def create_data_from_dataset_fixture(dataset_name):
    @pytest_fixture_plus(name="data_from_%s" % dataset_name, scope="module")
    @pytest.mark.parametrize('data_index', dataset_indices, ids="idx={}".format)
    @with_signature("(%s, data_index)" % dataset_name)
    def data_from_dataset(data_index, **kwargs):
        dataset = kwargs.popitem()[1]
        return dataset[data_index]
    return data_from_dataset

for dataset_name, dataset_indices in datasets_indices.items():
    globals()[dataset_name] = create_dataset_fixture(dataset_name)
    globals()["data_from_%s" % dataset_name] = create_data_from_dataset_fixture(dataset_name)

# ------ Test
@pytest_parametrize_plus('data', [fixture_ref('data_from_%s' % n)
                                  for n in datasets_indices.keys()])
def test_databases(data):
    # do test
    print(data)
Both provide the same output:
setting up dataset DA
data1_a
data2_a
data3_a
tearing down dataset DA
setting up dataset DB
data1_b
data2_b
tearing down dataset DB
EDIT: there might be a simpler solution if the setup/teardown procedure is the same for all datasets, using param_fixtures. I'll try to post that soon.
EDIT 2: actually the simpler solution I was referring to seems to lead to multiple setups/teardowns, as you already noted in the accepted answer:
from pytest_cases import pytest_fixture_plus, param_fixtures

# ------ Datasets
datasets = {
    'DA': ['data1_a', 'data2_a', 'data3_a'],
    'DB': ['data1_b', 'data2_b']
}
was_setup = {
    'DA': False,
    'DB': False
}
data_indices = {_dataset_name: list(range(len(_dataset_contents)))
                for _dataset_name, _dataset_contents in datasets.items()}

param_fixtures("dataset_name, data_index",
               [(_dataset_name, _data_idx) for _dataset_name in datasets
                for _data_idx in data_indices[_dataset_name]],
               scope='module')

@pytest_fixture_plus(scope="module")
def dataset(dataset_name):
    print("setting up dataset %s" % dataset_name)
    assert not was_setup[dataset_name]
    was_setup[dataset_name] = True
    yield datasets[dataset_name]
    print("tearing down dataset %s" % dataset_name)

@pytest_fixture_plus(scope="module")
def data(dataset, data_index):
    return dataset[data_index]

# ------ Test
def test_databases(data):
    # do test
    print(data)
I opened a ticket on pytest-dev to better understand why: pytest-dev#5457
See the documentation for details. (I'm the author, by the way.)
Source: https://stackoverflow.com/questions/46909275/parametrizing-tests-depending-of-also-parametrized-values-in-pytest