I am developing a Python module with several source files, each with its own test class derived from unittest right in the source. Consider the directory structure:
As of Python 2.7, test discovery is automated in the unittest package. From the docs:
Unittest supports simple test discovery. In order to be compatible with test discovery, all of the test files must be modules or packages importable from the top-level directory of the project (this means that their filenames must be valid identifiers).
Test discovery is implemented in TestLoader.discover(), but it can also be used from the command line. The basic command-line usage is:
cd project_directory
python -m unittest discover
By default it looks for test files named test*.py, but this can be changed, so you might use something like
python -m unittest discover --pattern='*.py'
in place of your test.py script.
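For completeness, the same discovery can also be driven from code rather than the command line; a minimal sketch, assuming you run it from project_directory and that the defaults (start directory '.', pattern 'test*.py') fit your layout:

import unittest

loader = unittest.TestLoader()
suite = loader.discover('.', pattern='test*.py')  # same defaults the CLI uses
unittest.TextTestRunner(verbosity=2).run(suite)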
I knew there was an obvious solution:
dirFoo\
    __init__.py
    test.py
    dirBar\
        __init__.py
        Foo.py
        Bar.py
Contents of dirFoo/test.py
# Pull the test classes from each module into this namespace
# so that unittest.main() can find them.
from dirBar.Foo import *
from dirBar.Bar import *

import unittest

if __name__ == "__main__":
    unittest.main()
Run the tests:
$ python test.py
...........
----------------------------------------------------------------------
Ran 11 tests in 2.305s
OK
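For reference, each source file carries its test class alongside the code it tests; a hypothetical dirBar/Foo.py might look roughly like this (the add function and FooTest class are invented for illustration):

import unittest

def add(a, b):
    return a + b

class FooTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(1, 2), 3)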
I came up with a snippet that may do what you want. It walks a path that you provide looking for Python packages/modules and accumulates a set of test suites from those modules, which it then executes all at once.
The nice thing about this is that it will work on all packages nested under the directory you specify, and you won't have to manually change the imports as you add new components.
import logging
import os
import unittest
MODULE_EXTENSIONS = set('.py .pyc .pyo'.split())
def unit_test_extractor(tup, path, filenames):
    """Pull ``unittest.TestSuite``s from modules in path
    if the path represents a valid Python package. Accumulate
    results in `tup[1]`.
    """
    package_path, suites = tup
    logging.debug('Path: %s', path)
    logging.debug('Filenames: %s', filenames)
    relpath = os.path.relpath(path, package_path)
    relpath_pieces = relpath.split(os.sep)
    if relpath_pieces[0] == '.':  # Base directory.
        relpath_pieces.pop(0)  # Otherwise, screws up module name.
    elif not any(os.path.exists(os.path.join(path, '__init__' + ext))
                 for ext in MODULE_EXTENSIONS):
        return  # Not a package directory and not the base directory, reject.
    logging.info('Base: %s', '.'.join(relpath_pieces))
    for filename in filenames:
        base, ext = os.path.splitext(filename)
        if ext not in MODULE_EXTENSIONS or base == '__init__':
            continue  # Not an importable Python module.
        logging.info('Module: %s', base)
        module_name = '.'.join(relpath_pieces + [base])
        logging.info('Importing from %s', module_name)
        # fromlist makes __import__ return the submodule itself
        # rather than the top-level package for dotted names.
        module = __import__(module_name, fromlist=[base])
        module_suites = unittest.defaultTestLoader.loadTestsFromModule(module)
        logging.info('Got suites: %s', module_suites)
        suites += module_suites
def get_test_suites(path):
    """:return: Iterable of suites for the packages/modules
    present under :param:`path`.
    """
    logging.info('Base path: %s', path)
    suites = []
    # Note: os.path.walk exists only in Python 2.
    os.path.walk(path, unit_test_extractor, (path, suites))
    logging.info('Got suites: %s', suites)
    return suites
if __name__ == '__main__':
    logging.basicConfig(level=logging.WARN)
    package_path = os.path.dirname(os.path.abspath(__file__))
    suites = get_test_suites(package_path)
    for suite in suites:
        unittest.TextTestRunner(verbosity=2).run(suite)
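Since os.path.walk was removed in Python 3, here is a rough sketch of the same traversal driven by os.walk instead, reusing the unit_test_extractor defined above (the name get_test_suites_py3 is mine, not part of the original snippet):

def get_test_suites_py3(path):
    """Same idea as get_test_suites(), but driven by os.walk."""
    suites = []
    for dirpath, dirnames, filenames in os.walk(path):
        unit_test_extractor((path, suites), dirpath, filenames)
    return suites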
Here is my test discovery code, which seems to do the job. I wanted to be able to extend the tests easily without having to list them in any of the involved files, while also avoiding writing all the tests in one single Übertest file.
So the structure is:
myTests.py
testDir\
    __init__.py
    testA.py
    testB.py
myTests.py looks like this:
import unittest

if __name__ == '__main__':
    testsuite = unittest.TestLoader().discover('.')
    unittest.TextTestRunner(verbosity=1).run(testsuite)
I believe this is the simplest solution for writing several test cases in one directory. The solution requires Python 2.7 or Python 3.
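For reference, discover('.') uses the default pattern test*.py, which both testA.py and testB.py match; each of those files just needs ordinary unittest.TestCase classes, e.g. a hypothetical testDir/testA.py (class and method names invented):

import unittest

class TestA(unittest.TestCase):
    def test_something(self):
        self.assertTrue(True)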
You should try nose. It's a library to help create tests, and it integrates with unittest or doctest. All you need to do is run nosetests and it'll find all your unit tests for you.
% nosetests # finds all tests in all subdirectories
% nosetests tests/ # find all tests in the tests directory
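A minimal session, assuming nose is not already installed and that you run it from your project root:

% pip install nose
% cd project_directory
% nosetests -v    # verbose: lists each test it finds as it runs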
In case it helps anyone, here is the approach I arrived at for solving this problem. My use case is the following directory structure:
mypackage/
    tests/
        test_category_1/
            tests_1a.py
            tests_1b.py
            ...
        test_category_2/
            tests_2a.py
            tests_2b.py
            ...
        ...
and I want all of the following to work in the obvious way and to accept the same command-line arguments that unittest itself accepts:
python -m mypackage.tests
python -m mypackage.tests.test_category_1
python -m mypackage.tests.test_category_1.tests_1a
The solution was to set up mypackage/tests/__init__.py like this:
import unittest

def prepare_load_tests_function(the__path__):
    test_suite = unittest.TestLoader().discover(the__path__[0])
    def load_tests(_a, _b, _c):  # (loader, tests, pattern) -- all unused here
        return test_suite
    return load_tests
and to set up mypackage/tests/__main__.py like this:
import unittest
from . import prepare_load_tests_function, __path__
load_tests = prepare_load_tests_function(__path__)
unittest.main()
and to copy and paste an empty __init__.py and the following __main__.py into each mypackage/tests/test_category_n/:
import unittest
from .. import prepare_load_tests_function
from . import __path__
load_tests = prepare_load_tests_function(__path__)
unittest.main()
and also to add the standard if __name__ == '__main__': unittest.main() in each actual test file.
(Works for me on Python 3.3 on Windows, ymmv.)
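To illustrate the last point, an individual test file such as mypackage/tests/test_category_1/tests_1a.py only needs ordinary test cases plus that one-liner at the bottom (the class and method names here are invented):

import unittest

class TestSomething(unittest.TestCase):
    def test_example(self):
        self.assertEqual(1 + 1, 2)

if __name__ == '__main__':
    unittest.main()

With that in place, the usual unittest options pass straight through, e.g. python -m mypackage.tests -v or python -m mypackage.tests --failfast.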