python-unittest

pytest -> How to use fixture return value in test method under a class

Submitted by 不问归期 on 2019-11-29 03:49:18
Question: I have a fixture that returns a value like this:

    import pytest

    @pytest.yield_fixture(scope="module")
    def oneTimeSetUp(browser):
        print("Running one time setUp")
        if browser == 'firefox':
            driver = webdriver.Firefox()
            print("Running tests on FF")
        else:
            driver = webdriver.Chrome()
            print("Running tests on chrome")
        yield driver
        print("Running one time tearDown")

This fixture gets the browser value from another fixture which reads the command line option. Then I have a test class where I have
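A common way to hand such a fixture's value to test methods inside a class (a sketch only, not taken from the excerpt) is to mark the class with @pytest.mark.usefixtures and have the fixture attach the driver to the requesting class via request.cls. The scope is narrowed to "class" here to keep the request.cls pattern simple, and make_driver is a hypothetical stand-in for the Firefox/Chrome branch above; both are assumptions about the intended setup. Plain @pytest.fixture with a yield has superseded @pytest.yield_fixture in current pytest.

    import pytest

    @pytest.fixture(scope="class")
    def oneTimeSetUp(request, browser):
        # make_driver is a hypothetical placeholder for the Firefox/Chrome branch
        driver = make_driver(browser)
        # expose the driver to the test class so methods can reach it as self.driver
        request.cls.driver = driver
        yield driver
        driver.quit()

    @pytest.mark.usefixtures("oneTimeSetUp")
    class TestLogin:
        def test_driver_available(self):
            # self.driver was attached by the fixture via request.cls
            assert self.driver is not None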

Unable to run unittest's main function in ipython/jupyter notebook

Submitted by 寵の児 on 2019-11-29 02:45:31
Question: I am giving an example which throws an error in ipython/jupyter notebook, but runs fine as an individual script.

    import unittest

    class Samples(unittest.TestCase):
        def testToPow(self):
            pow3 = 3**3
            assert pow3 == 27

    if __name__ == '__main__':
        unittest.main()

The error is below:

    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-7-232db94ae8b2> in <module>()
          8
          9 if __name__ == '__main__':
    ---> 10 unittest.main
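The usual workaround (a sketch based on how unittest.main() behaves, not on the truncated excerpt) is to stop it from parsing the notebook kernel's command-line arguments and from calling sys.exit() when the run finishes:

    import unittest

    class Samples(unittest.TestCase):
        def testToPow(self):
            self.assertEqual(3 ** 3, 27)

    # In a notebook cell: ignore the kernel's argv and keep the kernel alive
    unittest.main(argv=['first-arg-is-ignored'], exit=False)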

Skip unittest if some-condition in SetUpClass fails

Submitted by ⅰ亾dé卋堺 on 2019-11-28 23:21:08
I was playing with the pyUnit framework for unit testing my application. Is there any way to skip all the tests in a class if a certain condition in setUpClass fails? Currently, I am setting up the environment (creating resources, configuring global settings) in setUpClass, and if any of this resource creation fails I raise an exception. Instead of that, I want to skip the whole test suite.

Got the answer: for those who get stuck here, tests can be skipped from setUpClass in the following way:

    raise unittest.SkipTest(message)

studgeek: Instead of explicitly raising the SkipTest exception,
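Put together, that answer looks roughly like the sketch below; resource_available() is a hypothetical placeholder for whatever check the real setUpClass performs:

    import unittest

    class ResourceTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # hypothetical check standing in for the real resource creation
            if not resource_available():
                raise unittest.SkipTest("required resources unavailable; skipping class")

        def test_uses_resource(self):
            self.assertTrue(True)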

AttributeError: 'module' object has no attribute 'tests'

Submitted by 可紊 on 2019-11-28 16:55:49
I'm running this command:

    python manage.py test project.apps.app1.tests

and it causes this error:

    AttributeError: 'module' object has no attribute 'tests'

Below is my directory structure. I've also added app1 to my installed apps config.

    Traceback (most recent call last):
      File "manage.py", line 10, in <module>
        execute_from_command_line(sys.argv)
      File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 385, in execute_from_command_line
        utility.execute()
      File "/home/username/local/dev/local/lib/python2.7/site-packages/django/core/management/_
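A commonly reported cause of this error is that the tests module itself fails to import (a missing __init__.py along the package path, or an ImportError raised inside tests.py), so Django's test loader finds the app module but sees no tests attribute on it. Importing the module directly, as in the hedged sketch below, often surfaces the underlying error:

    # Run from the project root; the dotted path matches the test label in the question.
    import importlib

    importlib.import_module("project.apps.app1.tests")  # raises the real ImportError, if any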

Python Unit Testing: Automatically Running the Debugger when a test fails

Submitted by 三世轮回 on 2019-11-28 15:59:16
Is there a way to automatically start the debugger at the point at which a unittest fails? Right now I am just using pdb.set_trace() manually, but this is very tedious as I need to add it each time and take it out at the end. For example:

    import unittest

    class tests(unittest.TestCase):
        def setUp(self):
            pass

        def test_trigger_pdb(self):
            # this is the way I do it now
            try:
                assert 1 == 0
            except AssertionError:
                import pdb
                pdb.set_trace()

        def test_no_trigger(self):
            # this is the way I would like to do it:
            a = 1
            b = 2
            assert a == b
            # magically, pdb would start here
            # so that I could inspect the values of a and b
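One frequently suggested approach (a sketch, not taken from the excerpt) is a small decorator that opens pdb's post-mortem debugger whenever the wrapped test raises, so the test body itself stays untouched; running the tests under pytest with its --pdb flag achieves much the same thing without any code changes.

    import functools
    import pdb
    import sys
    import unittest

    def debug_on(*exceptions):
        """Drop into pdb post-mortem when the decorated test raises one of `exceptions`."""
        if not exceptions:
            exceptions = (AssertionError,)
        def decorator(f):
            @functools.wraps(f)
            def wrapper(*args, **kwargs):
                try:
                    return f(*args, **kwargs)
                except exceptions:
                    pdb.post_mortem(sys.exc_info()[2])
                    raise  # still report the failure after leaving the debugger
            return wrapper
        return decorator

    class Tests(unittest.TestCase):
        @debug_on()
        def test_no_trigger(self):
            a, b = 1, 2
            assert a == b  # pdb starts here on failure, with a and b inspectable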

What is the difference between setUp() and setUpClass() in Python unittest?

Submitted by  ̄綄美尐妖づ on 2019-11-28 15:51:47
What is the difference between setUp() and setUpClass() in the Python unittest framework? Why would setup be handled in one method over the other? I want to understand what part of setup is done in the setUp() and setUpClass() functions, as well as with tearDown() and tearDownClass().

The difference manifests itself when you have more than one test method in your class. setUpClass and tearDownClass are run once for the whole class; setUp and tearDown are run before and after each test method. For example:

    class Example(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            print("setUpClass")
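Filling out that truncated example (the extra methods and the expected output are added here to illustrate unittest's documented call order, not copied from the answer):

    import unittest

    class Example(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            print("setUpClass")

        def setUp(self):
            print("setUp")

        def test_1(self):
            print("test_1")

        def test_2(self):
            print("test_2")

        def tearDown(self):
            print("tearDown")

        @classmethod
        def tearDownClass(cls):
            print("tearDownClass")

    if __name__ == "__main__":
        unittest.main()

    # Output order: setUpClass, then setUp/test_1/tearDown,
    # then setUp/test_2/tearDown, and finally tearDownClass.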

Suppress print output in unittests [duplicate]

Submitted by 时间秒杀一切 on 2019-11-28 07:00:53
Question: This question already has an answer here: "Silence the stdout of a function in Python without trashing sys.stdout and restoring each function call" (8 answers)

Edit: please notice I'm using Python 2.6 (as tagged).

Say I have the following:

    class Foo:
        def bar(self):
            print 'bar'
            return 7

And say I have the following unit test:

    import unittest

    class ut_Foo(unittest.TestCase):
        def test_bar(self):
            obj = Foo()
            res = obj.bar()
            self.assertEqual(res, 7)

So if I run:

    unittest.main()

I get:

    bar # <-- I don't
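One approach that works on Python 2.6 (a sketch under that assumption; Foo is the class from the question) is to swap sys.stdout for an in-memory buffer around the call being tested:

    import sys
    import unittest
    from StringIO import StringIO          # Python 2 module, as tagged
    from contextlib import contextmanager

    @contextmanager
    def suppressed_stdout():
        """Temporarily replace sys.stdout so prints inside the block are swallowed."""
        saved = sys.stdout
        sys.stdout = StringIO()
        try:
            yield
        finally:
            sys.stdout = saved

    class ut_Foo(unittest.TestCase):
        def test_bar(self):
            obj = Foo()                    # Foo as defined in the question
            with suppressed_stdout():
                res = obj.bar()            # the print inside bar() is hidden
            self.assertEqual(res, 7)

On Python 2.7 and later, the test runner's built-in buffering (python -m unittest -b) suppresses output of passing tests without any extra code.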

Trying to implement python TestSuite

Submitted by 我只是一个虾纸丫 on 2019-11-28 06:47:53
I have two test cases (two different files) that I want to run together in a test suite. I can get the tests to run just by running python "normally", but when I select to run a python-unit test it says 0 tests run. Right now I'm just trying to get at least one test to run correctly.

    import usertest
    import configtest  # first test
    import unittest    # second test

    testSuite = unittest.TestSuite()
    testResult = unittest.TestResult()
    confTest = configtest.ConfigTestCase()
    testSuite.addTest(configtest.suite())
    test = testSuite.run(testResult)
    print testResult.testsRun  # prints 1 if run "normally"

Here
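A more conventional arrangement (a sketch; it assumes usertest and configtest each define TestCase subclasses, as the question implies) is to let TestLoader collect the tests from both modules and hand the combined suite to a runner that reports results:

    import unittest

    import configtest  # assumed to define ConfigTestCase
    import usertest    # assumed to define another TestCase subclass

    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromModule(configtest))
    suite.addTests(loader.loadTestsFromModule(usertest))

    if __name__ == "__main__":
        unittest.TextTestRunner(verbosity=2).run(suite)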

Persist variable changes between tests in unittest?

Submitted by 混江龙づ霸主 on 2019-11-28 06:12:38
How do I persist changes made within the same object inheriting from TestCase in unittest?

    from unittest import TestCase, main as unittest_main

    class TestSimpleFoo(TestCase):
        foo = 'bar'

        def setUp(self):
            pass

        def test_a(self):
            self.assertEqual(self.foo, 'bar')
            self.foo = 'can'

        def test_f(self):
            self.assertEqual(self.foo, 'can')

    if __name__ == '__main__':
        unittest_main()

I.e.: I want those two tests above to pass.

As some comments have echoed, structuring your tests in this manner is probably a design flaw in the tests themselves and you should consider restructuring them. However, if you want
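With that caveat in mind, a minimal sketch of one way to make both tests pass: each test method runs on a fresh TestCase instance, so assigning to self.foo only changes that instance, whereas assigning to the class attribute survives into the next test (unittest runs test methods in alphabetical order, so test_a runs before test_f):

    from unittest import TestCase, main as unittest_main

    class TestSimpleFoo(TestCase):
        foo = 'bar'

        def test_a(self):
            self.assertEqual(self.foo, 'bar')
            type(self).foo = 'can'             # set on the class, not the instance

        def test_f(self):
            self.assertEqual(self.foo, 'can')  # sees the class-level change

    if __name__ == '__main__':
        unittest_main()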

How can I write unit tests against code that uses matplotlib?

Submitted by 情到浓时终转凉″ on 2019-11-28 05:19:08
I'm working on a Python (2.7) program that produces a lot of different matplotlib figures (the data are not random). I'm willing to implement some tests (using unittest) to be sure that the generated figures are correct. For instance, I store the expected figure (data or image) in some place, I run my function and compare the result with the reference. Is there a way to do this?

In my experience, image comparison tests end up bringing more trouble than they are worth. This is especially the case if you want to run continuous integration across multiple systems (like TravisCI) that may have
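In line with that advice, one data-level alternative (a sketch; plot_series is a hypothetical stand-in for the plotting function under test) is to compare the arrays actually attached to the Axes instead of the rendered pixels:

    import unittest

    import numpy as np
    import matplotlib
    matplotlib.use("Agg")  # render off-screen so the test needs no display
    import matplotlib.pyplot as plt

    def plot_series(ax):
        # hypothetical function under test: plots y = x**2
        x = np.arange(5)
        ax.plot(x, x ** 2)

    class TestFigureData(unittest.TestCase):
        def test_line_data(self):
            fig, ax = plt.subplots()
            plot_series(ax)
            x, y = ax.lines[0].get_data()
            np.testing.assert_array_equal(x, np.arange(5))
            np.testing.assert_array_equal(y, np.arange(5) ** 2)
            plt.close(fig)

    if __name__ == "__main__":
        unittest.main()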