I have some kind of test data and want to create a unit test for each item. My first idea was to do it like this:
import unittest
l = [[\"foo\", \"a\", \"a\
import unittest

def generator(test_class, a, b, c, d, name):
    def test(self):
        print('Testexecution=', name)
        print('a=', a)
        print('b=', b)
        print('c=', c)
        print('d=', d)
    return test

def add_test_methods(test_class):
    test_list = [[3, 3, 5, 6, 'one'], [5, 5, 8, 9, 'two'], [0, 0, 5, 6, 'three'], [0, 0, 2, 3, 'Four']]
    for case in test_list:
        print('case=', case[0], case[1], case[2], case[3], case[4])
        test = generator(test_class, case[0], case[1], case[2], case[3], case[4])
        setattr(test_class, "test_%s" % case[4], test)

class TestAuto(unittest.TestCase):
    def setUp(self):
        print('Setup')

    def tearDown(self):
        print('TearDown')

add_test_methods(TestAuto)

if __name__ == '__main__':
    unittest.main(verbosity=1)
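As a sketch of how the generator above can do more than print: if the intent is to check each data row (the exact check is an assumption; the rows happen to pair equal values in their first two columns), the generated test could assert instead:

def generator(test_class, a, b, c, d, name):
    def test(self):
        # Assumed check: compare the first two values of each row.
        self.assertEqual(a, b)
    return test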
I'd been having trouble with a very particular style of parameterized tests. All our Selenium tests can run locally, but they should also be able to run remotely against several platforms on SauceLabs. Basically, I wanted to take a large number of already-written test cases and parameterize them with the fewest possible changes to the code. Furthermore, I needed to be able to pass the parameters into the setUp method, something I haven't seen solutions for elsewhere.
Here's what I've come up with:
import inspect
import types

test_platforms = [
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "10.0"},
    {'browserName': "internet explorer", 'platform': "Windows 7", 'version': "11.0"},
    {'browserName': "firefox", 'platform': "Linux", 'version': "43.0"},
]

def sauce_labs():
    def wrapper(cls):
        return test_on_platforms(cls)
    return wrapper

def test_on_platforms(base_class):
    for name, function in inspect.getmembers(base_class, inspect.isfunction):
        if name.startswith('test_'):
            for platform in test_platforms:
                new_name = '_'.join([name, ''.join(platform['browserName'].title().split()), platform['version']])
                new_function = types.FunctionType(function.__code__, function.__globals__, new_name,
                                                  function.__defaults__, function.__closure__)
                setattr(new_function, 'platform', platform)
                setattr(base_class, new_name, new_function)
            delattr(base_class, name)
    return base_class
With this, all I had to do was add a simple decorator @sauce_labs() to each regular old TestCase, and now when running them, they're wrapped up and rewritten, so that all the test methods are parameterized and renamed. LoginTests.test_login(self) runs as LoginTests.test_login_internet_explorer_10.0(self), LoginTests.test_login_internet_explorer_11.0(self), and LoginTests.test_login_firefox_43.0(self), and each one has the parameter self.platform to decide what browser/platform to run against, even in LoginTests.setUp, which is crucial for my task since that's where the connection to SauceLabs is initialized.
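For illustration, here is a hypothetical LoginTests sketch showing one way a decorated test case could pick the platform up in setUp. The lookup through self._testMethodName is my assumption about how to reach the per-method platform attribute that test_on_platforms sets, since the original setUp isn't shown:

import unittest

@sauce_labs()
class LoginTests(unittest.TestCase):
    def setUp(self):
        # test_on_platforms stores the platform dict on each generated test
        # method, so fetch it from the method that is about to run.
        bound_method = getattr(self, self._testMethodName)
        self.platform = getattr(bound_method, 'platform', None)
        # ... initialize the local or SauceLabs driver from self.platform here ...

    def test_login(self):
        self.assertIsNotNone(self.platform)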
Anyway, I hope this might be of help to someone looking to do a similar "global" parameterization of their tests!
I have found that this works well for my purposes, especially if I need to generate tests that run slightly different processes over a collection of data.
import unittest

def rename(newName):
    def renamingFunc(func):
        func.__name__ = newName
        return func
    return renamingFunc

class TestGenerator(unittest.TestCase):
    TEST_DATA = {}

    @classmethod
    def generateTests(cls):
        for dataName, dataValue in cls.TEST_DATA.items():
            for func in cls.getTests(dataName, dataValue):
                setattr(cls, "test_{:s}_{:s}".format(func.__name__, dataName), func)

    @classmethod
    def getTests(cls, dataName, dataValue):
        raise NotImplementedError("This must be implemented")

class TestCluster(TestGenerator):
    TEST_CASES = []

    @staticmethod
    def getTests(dataName, dataValue):
        def makeTest(case):
            @rename("{:s}".format(case["name"]))
            def test(self):
                # Do things with self, case, dataName, dataValue
                pass
            return test
        return [makeTest(c) for c in TestCluster.TEST_CASES]

TestCluster.generateTests()
The TestGenerator class can be used to spawn different sets of test cases like TestCluster. TestCluster can be thought of as an implementation of the TestGenerator interface.
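As a concrete sketch of such an implementation (the class name, data, and case names below are made up for illustration), a subclass supplying its own TEST_DATA and TEST_CASES might look like this:

class SquareTests(TestGenerator):
    TEST_DATA = {"small": 2, "large": 10}
    TEST_CASES = [{"name": "square_is_nonnegative"}]

    @staticmethod
    def getTests(dataName, dataValue):
        def makeTest(case):
            @rename(case["name"])
            def test(self):
                # Each generated test closes over the case dict and the data value.
                self.assertGreaterEqual(dataValue * dataValue, 0)
            return test
        return [makeTest(c) for c in SquareTests.TEST_CASES]

SquareTests.generateTests()
# Produces test_square_is_nonnegative_small and test_square_is_nonnegative_large.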
This can be done with pytest. Just write a file test_me.py with this content:
import pytest

@pytest.mark.parametrize('name, left, right', [['foo', 'a', 'a'],
                                               ['bar', 'a', 'b'],
                                               ['baz', 'b', 'b']])
def test_me(name, left, right):
    assert left == right, name
Then run your tests with the command py.test --tb=short test_me.py. The output will look like this:
=========================== test session starts ============================
platform darwin -- Python 2.7.6 -- py-1.4.23 -- pytest-2.6.1
collected 3 items
test_me.py .F.
================================= FAILURES =================================
_____________________________ test_me[bar-a-b] _____________________________
test_me.py:8: in test_me
assert left == right, name
E AssertionError: bar
==================== 1 failed, 2 passed in 0.01 seconds ====================
It's simple! pytest also has more features, like fixtures, marks, plain asserts, etc.
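Fixtures in particular can be parameterized as well; here is a minimal sketch (the fixture and test names are illustrative, not from the example above), where every test that requests the fixture runs once per value in params:

import pytest

@pytest.fixture(params=['a', 'b'])
def letter(request):
    return request.param

def test_letter_is_lowercase(letter):
    assert letter == letter.lower()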
I came across ParamUnittest the other day when looking at the source code of radon (there is example usage in the GitHub repo). It should work with other frameworks that extend TestCase (like nose).
Here is an example:
import unittest
import paramunittest

@paramunittest.parametrized(
    ('1', '2'),
    # (4, 3),  <---- uncomment to have a failing test
    ('2', '3'),
    (('4', ), {'b': '5'}),
    ((), {'a': 5, 'b': 6}),
    {'a': 5, 'b': 6},
)
class TestBar(unittest.TestCase):
    def setParameters(self, a, b):
        self.a = a
        self.b = b

    def testLess(self):
        self.assertLess(self.a, self.b)
Use the ddt library. It adds simple decorators for the test methods:
import unittest
from ddt import ddt, data
from mycode import larger_than_two

@ddt
class FooTestCase(unittest.TestCase):
    @data(3, 4, 12, 23)
    def test_larger_than_two(self, value):
        self.assertTrue(larger_than_two(value))

    @data(1, -3, 2, 0)
    def test_not_larger_than_two(self, value):
        self.assertFalse(larger_than_two(value))
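Here mycode stands in for your own module. To run the example standalone, a stub consistent with the data above (my assumption about what larger_than_two does) would be:

# mycode.py -- hypothetical module backing the example above
def larger_than_two(value):
    return value > 2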
This library can be installed with pip. It doesn't require nose, and it works well with the standard-library unittest module.