How to see which tests were run during Django's manage.py test command

Question:

After test execution finishes with Django’s manage.py test command, only the number of tests that were run is printed to the console.

(virtualenv) G:\Project>python manage.py test
Creating test database for alias 'default'...
True
..
----------------------------------------------------------------------
Ran 2 tests in 0.017s

OK
Destroying test database for alias 'default'...

Is there any way to see:

  1. which tests were actually executed
  2. from what module
  3. in what order

I haven’t found a solution in the docs.

Asked By: Mariusz Jamro


Answers:

You can pass -v 2 to the test command:

python manage.py test -v 2

After running this command you’ll get something like this (I’m using Django 2; feel free to ignore the migration/database output):

Creating test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Operations to perform:
  Synchronize unmigrated apps: messages, staticfiles
  Apply all migrations: admin, auth, contenttypes, sessions
Synchronizing apps without migrations:
  Creating tables...
   Running deferred SQL...
Running migrations:
  Applying contenttypes.0001_initial... OK
  ...
  Applying sessions.0001_initial... OK
System check identified no issues (0 silenced).
test_equal_hard (polls.tests.TestHard) ... ok      <--------+
test_equal_simple (polls.tests.TestSimple) ... ok  <--------+
                                                            |
                                                            |
           That's your tests!  >----------------------------+
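
For reference, a minimal polls/tests.py that would produce those two lines might look like this (a sketch: the class and method names are taken from the output above, but the assertion bodies are invented):

from django.test import TestCase

class TestHard(TestCase):
    def test_equal_hard(self):
        # hypothetical body; any passing assertion yields the "ok" above
        self.assertEqual(2 + 2, 4)

class TestSimple(TestCase):
    def test_equal_simple(self):
        # hypothetical body
        self.assertEqual(1 + 1, 2)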

By the way, -v stands for verbosity (you can also use --verbosity=2):

python manage.py test --verbosity=2

Here’s the relevant excerpt from python manage.py test --help:

-v {0,1,2,3}, --verbosity {0,1,2,3}

Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
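
The same verbosity setting also works if you drive the test runner from Python rather than from manage.py. A minimal sketch, assuming DJANGO_SETTINGS_MODULE is configured and "polls" is the app label you want to test:

import django
from django.conf import settings
from django.test.utils import get_runner

django.setup()
TestRunner = get_runner(settings)       # typically DiscoverRunner
runner = TestRunner(verbosity=2)        # same effect as passing -v 2
failures = runner.run_tests(["polls"])  # "polls" is an assumed app label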

Answered By: Nigel Tufnel

Nigel’s answer is great and definitely the lowest-barrier-to-entry option. However, you can get even better feedback with django_nose (and it’s not that difficult to set up ;).

The steps below are adapted from BDD with Python.

First, install the requirements:

pip install nose pinocchio django_nose

Then add the following to settings.py:

TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
NOSE_ARGS = ['--with-spec', '--spec-color']

Then run your tests as usual:

python manage.py test

Output should look something like this:

[screenshot: colorized, spec-style output with one descriptive line per test]

Note: the docstrings under your test methods can be used to give even better output than just the name.

e.g.:

def test_something(self):
    """Something should happen"""
    ...

Will output “Something should happen” when running the test.

For extra points, you can also generate a code-coverage report:

pip install coverage

Add the following to your NOSE_ARGS in settings.py: '--with-coverage', '--cover-html', '--cover-package=.', '--cover-html-dir=reports/cover'

e.g.:

NOSE_ARGS = ['--with-spec', '--spec-color',
             '--with-coverage', '--cover-html',
             '--cover-package=.', '--cover-html-dir=reports/cover']

Then you’ll get a code-coverage summary when you run python manage.py test, as well as a neat HTML report in reports/cover (open reports/cover/index.html in a browser).
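
If you’d rather not route coverage through nose, coverage.py on its own can produce a similar report (an alternative sketch, not part of the original recipe; both commands are standard coverage.py usage):

coverage run --source='.' manage.py test
coverage html  # writes an HTML report to htmlcov/ by default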

Answered By: toast38coza