I have just started writing some unit tests for a Python project of mine, using coverage. I’m currently only testing a small proportion of the code, but I am trying to work out the code coverage.
I run my tests and get the coverage using the following:

python -m unittest discover -s tests/
coverage run -m unittest discover -s tests/
coverage report -m
The problem I’m having is that coverage is telling me I have 44% code coverage and is only counting the files that:
were tested in the unit tests (i.e., all the files that were not tested are missing from the overall coverage)
were in the libraries in the virtual environment; it is also reporting coverage of the actual test files. Surely it should not be including the actual tests in the results?
Furthermore, it says the files that are actually exercised by these unit tests only have their first few lines covered (which are in most cases just the import statements).
How do I get a more realistic code coverage or is this how it is meant to be?
If you use nose as a test runner instead, the coverage plugin for it provides:
--cover-inclusive   Include all python files under working directory in coverage report. Useful for discovering holes in test coverage if not all files are imported by the test suite. [NOSE_COVER_INCLUSIVE]
--cover-tests       Include test modules in coverage report [NOSE_COVER_TESTS]
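For example, a run with both options enabled might look like this (a sketch assuming the nose coverage plugin is installed; the package name myproject is a placeholder for your own):

```shell
# Run the suite under nose with coverage enabled, including files that
# were never imported (--cover-inclusive) and the test modules themselves
# (--cover-tests) in the report.
nosetests --with-coverage --cover-inclusive --cover-tests --cover-package=myproject
```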
Add --source=. to the coverage run line. It will both limit the focus to the current directory and search for .py files that weren’t run at all.
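Putting that together with the commands from the question, the run becomes:

```shell
# Measure only code under the current directory (excluding the virtualenv's
# libraries) and also report .py files never imported by the tests.
coverage run --source=. -m unittest discover -s tests/
coverage report -m
```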
--source=. won’t work if you don’t have explicit __init__.py files in every directory containing Python files.
As Ned Batchelder pointed out in a comment on his answer, coverage.py looks for these __init__.py files to know if the contents of a directory are importable (source code on GitHub).
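One quick way to spot directories that would be skipped for this reason is to walk the tree yourself; find_missing_init below is a hypothetical helper, not part of coverage.py:

```python
import os

def find_missing_init(root):
    """Return directories under root that contain .py files but lack an
    __init__.py, and so would be skipped by coverage.py's --source search."""
    missing = []
    for dirpath, dirnames, filenames in os.walk(root):
        has_py = any(f.endswith(".py") for f in filenames)
        if has_py and "__init__.py" not in filenames:
            missing.append(dirpath)
    return sorted(missing)
```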
You can find this in the coverage docs: add the following to your .coveragerc:
[report]
include_namespace_packages = True
(This might include some unwanted packages as well; use the omit config option to remove those.)
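For example, a .coveragerc that enables namespace packages while leaving out the test files and the virtual environment might look like this (the .venv/* path is an assumption; adjust the patterns to your layout):

```ini
[report]
include_namespace_packages = True
omit =
    tests/*
    .venv/*
```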