Can Python's unittest test in parallel, like nose can?
Question:
Python’s NOSE testing framework has the concept of running multiple tests in parallel.
The purpose of this is not to test concurrency in the code, but to make tests for code that has "no side-effects, no ordering issues, and no external dependencies" run faster. The performance gain comes from concurrent I/O waits when tests access different devices, better use of multiple CPUs/cores, and from running time.sleep() calls in parallel.
I believe the same thing could be done with Python's unittest testing framework by plugging in a custom test runner.
Has anyone had any experience with such a beast, and can they make any recommendations?
Answers:
Python unittest's built-in test runner does not run tests in parallel. It probably wouldn't be too hard to write one that does. I've written my own just to reformat the output and time each test; that took maybe half a day. I think you can swap out the TestSuite class for a derived one that uses multiprocessing without much trouble.
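A minimal sketch of that idea (the ThreadedTestSuite name is hypothetical, and it uses a thread pool rather than multiprocessing so a shared TestResult still works; unittest's result objects are not synchronized, so treat this as illustrative, not production-ready):
import unittest
from concurrent.futures import ThreadPoolExecutor

class ThreadedTestSuite(unittest.TestSuite):
    """Illustrative sketch: run each contained test in a thread pool.

    Assumes the tests are independent and thread-safe.
    """

    def run(self, result, debug=False):
        with ThreadPoolExecutor(max_workers=4) as pool:
            # Calling test(result) is how unittest runs a case or nested suite.
            futures = [pool.submit(test, result) for test in self]
            for future in futures:
                future.result()  # surface any unexpected worker exceptions
        return result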
The testtools package is an extension of unittest that supports running tests concurrently. It can be used with your existing test classes that inherit from unittest.TestCase.
For example:
import unittest
import testtools

class MyTester(unittest.TestCase):
    # Tests...
    def test_example(self):  # placeholder test so the snippet runs
        self.assertTrue(True)

suite = unittest.TestLoader().loadTestsFromTestCase(MyTester)
# ConcurrentStreamTestSuite expects (case, route_code) pairs; None means no routing.
concurrent_suite = testtools.ConcurrentStreamTestSuite(lambda: ((case, None) for case in suite))
concurrent_suite.run(testtools.StreamResult())
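If you want an aggregate pass/fail result rather than raw stream events, testtools also provides StreamSummary (a sketch; check the API of your testtools version):
result = testtools.StreamSummary()
result.startTestRun()
try:
    concurrent_suite.run(result)
finally:
    result.stopTestRun()
print(result.wasSuccessful())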
If this is what you did initially:
runner = unittest.TextTestRunner()
runner.run(suite)
replace it with:
from concurrencytest import ConcurrentTestSuite, fork_for_tests

# fork_for_tests(4) splits the suite across four forked worker processes
concurrent_suite = ConcurrentTestSuite(suite, fork_for_tests(4))
runner.run(concurrent_suite)
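Note that fork_for_tests relies on os.fork(), so this approach works only on POSIX systems, not on Windows.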
Use pytest-xdist if you want parallel runs.
The pytest-xdist plugin extends py.test with some unique test execution modes:
- test run parallelization: if you have multiple CPUs or hosts you can use those for a combined test run. This allows you to speed up development or to use the special resources of remote machines.
[…]
More info: Rohan Dunham’s blog
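For example, after installing the plugin, parallelism is just a command-line flag (-n is pytest-xdist's standard switch):
pip install pytest-xdist
pytest -n 4      # distribute tests across four worker processes
pytest -n auto   # let xdist pick a worker count from the available CPUs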
If you only need Python 3 support, consider using my fastunit.
I changed a small amount of unittest's code so that test cases run as coroutines, and it has really saved me time.
I only finished it last week and it may not be tested thoroughly enough, so if you run into any errors, please let me know so I can improve it. Thanks!
Another option, which might be easier if you don't have that many test cases and they are not interdependent, is to kick off each test case manually in a separate process.
For instance, open a couple of tmux sessions and then kick off a test case in each session using something like:
python -m unittest -v MyTestModule.MyTestClass.test_n
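If you would rather not manage tmux panes by hand, a plain shell loop gives the same effect (a sketch; the test names here are placeholders):
for t in test_one test_two test_three; do
    python -m unittest -v MyTestModule.MyTestClass.$t &
done
wait  # block until all background test runs finish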
You can override unittest.TestSuite and implement some concurrency paradigm, then use your customized TestSuite class just like a normal unittest suite. In the following example, I implement my customized TestSuite class using asyncio:
import unittest
import asyncio

class CustomTestSuite(unittest.TestSuite):
    def run(self, result, debug=False):
        """
        Override 'run' so the tests in this suite execute in parallel.
        :param result: the shared TestResult that collects outcomes
        :param debug: kept for API compatibility; unused here
        :return: the populated result object
        """
        topLevel = False
        if getattr(result, '_testRunEntered', False) is False:
            result._testRunEntered = topLevel = True
        # Collect one coroutine per test, then drive them all on a fresh event loop.
        asyncMethod = []
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        for index, test in enumerate(self):
            asyncMethod.append(self.startRunCase(index, test, result))
        if asyncMethod:
            # Wrap the coroutines in tasks: passing bare coroutines to
            # asyncio.wait() is deprecated since Python 3.8 and rejected in 3.11+.
            tasks = [loop.create_task(coro) for coro in asyncMethod]
            loop.run_until_complete(asyncio.wait(tasks))
        loop.close()
        if topLevel:
            self._tearDownPreviousClass(None, result)
            self._handleModuleTearDown(result)
            result._testRunEntered = False
        return result

    async def startRunCase(self, index, test, result):
        def _isnotsuite(test):
            "A crude way to tell apart test cases and suites with duck typing."
            try:
                iter(test)
            except TypeError:
                return True
            return False

        loop = asyncio.get_event_loop()
        if result.shouldStop:
            return False
        if _isnotsuite(test):
            # Mirror the class- and module-level fixture handling
            # from the stock unittest.TestSuite.run().
            self._tearDownPreviousClass(test, result)
            self._handleModuleFixture(test, result)
            self._handleClassSetUp(test, result)
            result._previousTestClass = test.__class__
            if (getattr(test.__class__, '_classSetupFailed', False) or
                    getattr(result, '_moduleSetUpFailed', False)):
                return True
        # Run the (blocking) test case in the default thread-pool executor.
        await loop.run_in_executor(None, test, result)
        if self._cleanup:
            self._removeTestAtIndex(index)
class TestStringMethods(unittest.TestCase):
def test_upper(self):
self.assertEqual('foo'.upper(), 'FOO')
def test_isupper(self):
self.assertTrue('FOO'.isupper())
self.assertFalse('Foo'.isupper())
def test_split(self):
s = 'hello world'
self.assertEqual(s.split(), ['hello', 'world'])
# check that s.split fails when the separator is not a string
with self.assertRaises(TypeError):
s.split(2)
if __name__ == '__main__':
suite = CustomTestSuite()
suite.addTest(TestStringMethods('test_upper'))
suite.addTest(TestStringMethods('test_isupper'))
suite.addTest(TestStringMethods('test_split'))
unittest.TextTestRunner(verbosity=2).run(suite)
In the main block, I just construct my customized CustomTestSuite class, add all the test cases, and finally run it.
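Note that startRunCase hands each test to loop.run_in_executor(None, ...), i.e. the default thread-pool executor, so the concurrency here is thread-based: it helps I/O-bound tests, but CPU-bound tests remain serialized by the GIL.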