Unit-testing with dependencies between tests

Question:

How do you do unit testing when you have

  • some general unit tests
  • more sophisticated tests checking edge cases, which depend on the general ones

To give an example, imagine testing a CSV reader (the decorator notation below is made up for demonstration):

def test_readCsv(): ...

@dependsOn(test_readCsv)
def test_readCsv_duplicateColumnName(): ...

@dependsOn(test_readCsv)
def test_readCsv_unicodeColumnName(): ...

I expect sub-tests to run only if their parent test succeeds. The reasoning: running these tests takes time, and many failure reports that all trace back to a single cause aren’t informative either. Of course, I could shoehorn all the edge cases into the main test, but I wonder if there is a more structured way to do this.
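To make the intent concrete, here is a minimal sketch of how such a decorator could be emulated on top of plain unittest. The failure registry and both decorators are improvised for illustration (they are not an existing API), and the sketch relies on unittest’s default alphabetical method ordering to run the parent test first:

import unittest

# Improvised bookkeeping: a module-level set of test names that have
# failed so far in this run.
_failed = set()

def record_failures(test):
    # Wrap a test so any exception (including a skip) marks it as
    # failed, which lets skips and failures cascade to dependents.
    def wrapper(self):
        try:
            test(self)
        except Exception:
            _failed.add(test.__name__)
            raise
    wrapper.__name__ = test.__name__
    return wrapper

def depends_on(parent):
    # Skip the decorated test when the test it depends on has failed.
    def decorator(test):
        def wrapper(self):
            if parent.__name__ in _failed:
                _failed.add(test.__name__)  # cascade to deeper dependents
                self.skipTest("dependency %s failed" % parent.__name__)
            test(self)
        wrapper.__name__ = test.__name__
        return record_failures(wrapper)
    return decorator

class TestReadCsv(unittest.TestCase):
    @record_failures
    def test_readCsv(self):
        pass  # general test body goes here

    @depends_on(test_readCsv)
    def test_readCsv_duplicateColumnName(self):
        pass  # edge-case body goes here

if __name__ == "__main__":
    unittest.main()  # default alphabetical order runs the parent first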

I’ve found these related but different questions.

UPDATE:

I’ve found TestNG, which has great built-in support for test dependencies. You can write tests like this:

@Test(dependsOnMethods = { "test_readCsv" })
public void test_readCsv_duplicateColumnName() {
   ...
}
Asked By: Adam Schmideg


Answers:

I’m not sure what language you’re referring to (you don’t specifically mention it in your question), but for something like PHPUnit there is an @depends annotation that will only run a test if the depended-upon test has already passed.

Depending on what language or unit-testing framework you use, something similar may also be available.

Answered By: Dave

Personally, I wouldn’t worry about creating dependencies between unit tests. This sounds like a bit of a code smell to me. A few points:

  • If a test fails, let the others fail too, and get a good idea of the scale of the problem that the adverse code change caused.
  • Test failures should be the exception rather than the norm, so why spend effort creating dependencies when, the vast majority of the time (hopefully!), no benefit is derived? If failures happen often, your problem is not a lack of test dependencies but frequent test failures.
  • Unit tests should run really fast. If they run slowly, focus your efforts on speeding them up rather than on preventing subsequent failures. Do this by decoupling your code more and by using dependency injection or mocking, as sketched below.
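As a small illustration of that last point (the read_csv function here is a hypothetical stand-in for the code under test): designing the code to accept any file-like object lets a test inject a cheap in-memory buffer instead of touching the filesystem.

import io

# Hypothetical code under test: it accepts any file-like object rather
# than a filename, so tests can pass an in-memory buffer.
def read_csv(f):
    return [line.rstrip("\n").split(",") for line in f]

def test_read_csv_stays_fast():
    fake_file = io.StringIO("a,b\n1,2\n")  # no disk I/O at all
    assert read_csv(fake_file) == [["a", "b"], ["1", "2"]]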
Answered By: Chris Knight

According to best practices and unit-testing principles, a unit test should not depend on other tests.

Each test case should check one concrete, isolated behavior.

Then, if some test case fails, you will know exactly what went wrong in your code.

Answered By: Andriy Sholokh

Proboscis is a Python version of TestNG (which is a Java library).

See packages.python.org/proboscis/

It supports dependencies, e.g.:

@test(depends_on=[test_readCsv])
def test_readCsv_duplicateColumnName():
    ...
Answered By: bo198214

I have implemented a plugin for Nose (Python) which adds support for test dependencies and test prioritization.

As mentioned in the other answers and comments, this is often a bad idea; however, there can be exceptions where you would want to do this (in my case it was performance for integration tests, with a huge overhead for getting into a testable state: minutes vs. hours).

You can find it here: nosedep.

A minimal example is:

def test_a():
    pass

@depends(before=test_a)
def test_b():
    pass

This ensures that test_b is always run before test_a.

Answered By: Zitrax

You may want to use pytest-dependency. According to its documentation, the code looks elegant:

import pytest

@pytest.mark.dependency()
@pytest.mark.xfail(reason="deliberate fail")
def test_a():
    assert False

@pytest.mark.dependency()
def test_b():
    pass

@pytest.mark.dependency(depends=["test_a"])
def test_c():
    pass

@pytest.mark.dependency(depends=["test_b"])
def test_d():
    pass

@pytest.mark.dependency(depends=["test_b", "test_c"])
def test_e():
    pass
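With this layout, test_a deliberately fails, so test_c (which depends on it) is skipped; test_b passes, so test_d runs; and test_e is skipped as well, because one of its dependencies, test_c, did not pass.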

Please note, this is a plugin for pytest, not for unittest (which is part of Python itself). So you need two more dependencies (e.g., add them to requirements.txt):

pytest==5.1.1
pytest-dependency==0.4.0
Answered By: Hubbitus