Is it possible to put common checks for tests separately in pytest?

Question:

I have a lot of negative tests, each with a step that checks the illegal action had no effect on my system. To make sure of that, I added a time.sleep() and then check the system.
Example:

import time
from uuid import uuid4

import pytest

class TestIncorrect:
    def test_incorrect_table(self):
        print("Step 1:")
        print("Do something illegal, e.g. write to an incorrect Redis table")
        write_redis(correct_creds, incorrect_table, f"new record: {uuid4()}")

        print("Step 2: Check the illegal action had no effect")
        print("Wait, then check that there are no new records")
        time.sleep(LONG_TIME)
        assert read_redis(correct_creds, correct_table) == [''], "Found new record in Redis"

    def test_incorrect_pass(self):
        print("Step 1:")
        print("Do something illegal, e.g. use an incorrect Redis password")
        write_redis(incorrect_creds, correct_table, f"new record: {uuid4()}")

        print("Step 2: Check the illegal action had no effect")
        print("Wait, then check that there are no new records")
        time.sleep(LONG_TIME)
        assert read_redis(correct_creds, correct_table) == [''], "Found new record in Redis"

That approach is fine for a few tests, but as the number of tests grows, the total execution time grows as (number of tests) * LONG_TIME.
We know that an illegal action (incorrect params) cannot write anything to Redis. So what if "Step 2" were moved out into a separate check that Redis is empty after all tests?
I mean:

  1. Run test_incorrect_table:
    • "Step 1"
    • Add the test params [test case, uuid] to the check_redis_is_empty queue.
  2. Run test_incorrect_pass:
    • "Step 1"
    • Add the test params [test case, uuid] to the check_redis_is_empty queue.
      … repeat for all tests
  3. Run a common fixture check_redis_is_empty (a rough sketch follows the list):
    • time.sleep(LONG_TIME)
    • Get the Redis records and look for the uuids from the queue:
      • If a uuid is in the Redis records, mark its test case as Fail.
      • If a uuid is not in Redis, pass (or additionally mark the test case as Pass).
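
In pytest terms, I imagine something like this rough sketch (reusing the pseudo-helpers from my example above; note it reports one combined failure instead of marking individual tests):

import time

import pytest

pending_checks = []  # [test name, uuid] pairs queued by "Step 1" of each test


@pytest.fixture(scope="session", autouse=True)
def check_redis_is_empty():
    yield  # all tests run their "Step 1" and fill the queue first
    time.sleep(LONG_TIME)  # wait once for the whole session
    records = read_redis(correct_creds, correct_table)
    leaked = [(test, uid) for test, uid in pending_checks if uid in records]
    assert not leaked, f"Found new records in Redis: {leaked}"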

That decreases the time from tests_num * LONG_TIME to a single LONG_TIME.
Is it possible? Are there other solutions?

I know @pytest.hookimpl(hookwrapper=True, trylast=True) can be used to run code after test execution, but that is called after each test.
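
For reference, this is the kind of hook I mean (a sketch; it wraps the report of every single test, so a per-test time.sleep(LONG_TIME) would still be needed):

import pytest


@pytest.hookimpl(hookwrapper=True, trylast=True)
def pytest_runtest_makereport(item, call):
    outcome = yield  # the test phase runs here
    report = outcome.get_result()
    # the report could be inspected or tweaked here, once per test phase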

Asked By: Andrey


Answers:

You should not try to change a test's result state after it has executed.

Failures of teardowns are reported separately from tests, even for function-scoped fixtures. So any implementation that marks a test as failed outside of that test (e.g. in a session-scoped teardown) will be quite hacky and not a best practice.

Maybe you should think of some instant confirmation that the write_redis function had no effect. Doesn't this method return some response from Redis?
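
For example, something purely hypothetical (I don't know how your write_redis is implemented; this assumes it is backed by redis-py and lets the client's exception propagate):

import pytest
import redis  # assumption: write_redis uses redis-py internally
from uuid import uuid4


def test_incorrect_pass():
    # If a rejected write raises immediately, no sleep-and-poll is needed.
    with pytest.raises(redis.AuthenticationError):
        write_redis(incorrect_creds, correct_table, f"new record: {uuid4()}")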

UPD:

Here is an example of a fixture with scope='session'.
It returns a list that is shared between all tests, and it expects the tests to fill it with values containing the test name (which is request.node.nodeid) and an assertion (a lambda) to be performed later. I used a namedtuple for this, but you can use any type you prefer, like a dict or a tuple.

In the teardown of the fixture (after the last test of the session has executed), all assertions in this list are performed and a cumulative error message can be generated. I prefer to use assertpy.soft_assertions() for things like that, but it's up to you.

So if any of the lambdas returns False, you will get the corresponding test name in the teardown's error message. Note that this will not fail the test case itself.

from typing import List

import pytest
from collections import namedtuple
from assertpy import soft_assertions, assert_that


TestAssertion = namedtuple("TestAssertion", "test assertion")


def read_redis(*args) -> List[str]:
    """Mock function of read_redis for my example"""
    if "empty" in args:
        return [""]
    else:
        return ["new record"]


@pytest.fixture(scope="session")
def check_later():
    """Session-scoped fixture that returns a list of deferred checks"""
    to_check: List[TestAssertion] = []

    yield to_check

    # Teardown: runs once, after the last test of the session.
    with soft_assertions():
        for test_assertion in to_check:
            assert_that(test_assertion.assertion(), test_assertion.test).is_true()


def test_fail_1(check_later, request):
    # some code like incorrect write_redis
    check_later.append(TestAssertion(request.node.nodeid, lambda: read_redis("not_empty") == [""]))


def test_pass_1(check_later, request):
    # some code like incorrect write_redis
    check_later.append(TestAssertion(request.node.nodeid, lambda: read_redis("empty") == [""]))


def test_fail_2(check_later, request):
    # some code like incorrect write_redis
    check_later.append(TestAssertion(request.node.nodeid, lambda: read_redis("not_empty") == [""]))

The output will contain all passed tests and one failed teardown:

collected 3 items

test_teardown_failure/test_incorrect.py ...E  [100%]

=== ERRORS ===
___ ERROR at teardown of test_fail_2 ___

    @pytest.fixture(scope="session")
    def check_later():
        """Session scoped fixture that returns list of checks"""
        to_check: List[TestAssertion] = []

        yield to_check

>       with soft_assertions():
E       AssertionError: soft assertion failures:
E       1. [test_teardown_failure/test_incorrect.py::test_fail_1] Expected <True>, but was not.
E       2. [test_teardown_failure/test_incorrect.py::test_fail_2] Expected <True>, but was not.

test_teardown_failure/test_incorrect.py:26: AssertionError

UPD2:

Here is a way to mark tests as failed at the end of the session, based on the session teardown fixture I described above:

conftest.py

import pytest
from _pytest.stash import StashKey

failed_tests = StashKey()  # nodeids that the session teardown decided should fail
test_reports = StashKey()  # collected "call" reports for all tests


def pytest_sessionstart(session):
    session.config.stash[test_reports] = []


def pytest_report_teststatus(report, config):
    # Collect the report of each test's "call" phase so it can be patched later.
    if report.when == "call":
        config.stash.get(test_reports, []).append(report)


def pytest_sessionfinish(session, exitstatus):
    # Runs after the fixture teardown below, so failed_tests is already filled.
    reports = session.config.stash.get(test_reports, [])
    tests_to_fail = session.config.stash.get(failed_tests, [])
    for report in [r for r in reports if r.nodeid in tests_to_fail]:
        report.outcome = "failed"


@pytest.fixture(scope="session")
def check_later(pytestconfig):
    """Session-scoped fixture that returns a list of deferred checks"""
    pytestconfig.stash[failed_tests] = []
    to_check = []

    yield to_check

    # Teardown: record the nodeids of tests whose deferred assertion failed.
    for test_assertion in to_check:
        if not test_assertion.assertion():
            pytestconfig.stash[failed_tests].append(test_assertion.test)

So running pytest -rA on my test examples will give the following output:

collected 3 items

test_teardown_failure/test_incorrect.py ...                                     [100%]

=== PASSES ===
=== short test summary info ===
FAILED test_teardown_failure/test_incorrect.py::test_fail_1
PASSED test_teardown_failure/test_incorrect.py::test_pass_1
FAILED test_teardown_failure/test_incorrect.py::test_fail_2
=== 3 passed, 1 warning in 0.04s ===

As you can see, at the end it still shows 3 passed, even though 2 tests actually failed. To change this behavior you have to go even deeper, define your own pytest terminalreporter, and change the way it counts results. IMHO it's not worth it at all 🙂
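
If you really wanted to, a lighter-weight starting point than a fully custom reporter could be mutating terminalreporter.stats from the pytest_terminal_summary hook. This is an untested, hacky sketch that reuses the failed_tests stash key from the conftest.py above:

def pytest_terminal_summary(terminalreporter, exitstatus, config):
    # Move reports of tests failed by the session teardown from the
    # "passed" bucket to the "failed" bucket so the final counters match.
    # Untested sketch: terminalreporter.stats is internal-ish and may change.
    tests_to_fail = config.stash.get(failed_tests, [])
    passed = terminalreporter.stats.get("passed", [])
    for report in [r for r in passed if r.nodeid in tests_to_fail]:
        passed.remove(report)
        terminalreporter.stats.setdefault("failed", []).append(report)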

Answered By: pL3b