How to skip a pytest using an external fixture?

Question:

Background

I am running py.test with a fixture in a conftest file. You can see the code below (this all works fine):

example_test.py

import pytest

@pytest.fixture
def platform():
    return "ios"

@pytest.mark.skipif("platform == 'ios'")
def test_ios(platform):
    if platform != 'ios':
        raise Exception('not ios')

def test_android_external(platform_external):
    if platform_external != 'android':
        raise Exception('not android')

conftest.py

import pytest

@pytest.fixture
def platform_external():
    return "android"

Problem

Now I want to be able to skip some tests that do not apply to my current test run. In my example I am running tests either for iOS or Android (this is just for demonstration purposes and could be any other expression).

Unfortunately I cannot get hold of my externally defined fixture platform_external in the skipif statement. When I run the code below I receive the following exception: NameError: name 'platform_external' is not defined. I don’t know if this is a py.test bug, since locally defined fixtures seem to work.

add-on for example_test.py

@pytest.mark.skipif("platform_external == 'android'")
def test_android(platform_external):
    """This test will fail as 'platform_external' is not available in the decorator.
    It is only available for the function parameter."""
    if platform_external != 'android':
        raise Exception('not android')

So I thought I would just create my own decorator, only to find that it doesn’t receive the fixtures as parameters:

from functools import wraps

def platform_custom_decorator(func):
    @wraps(func)
    def func_wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return func_wrapper

@platform_custom_decorator
def test_android_2(platform_external):
    """This test will also fail as 'platform_external' will not be given to the 
    decorator."""
    if platform_external != 'android':
        raise Exception('not android')

Question

How can I define a fixture in a conftest file and use it to (conditionally) skip a test?

Asked By: Marco Pashkov


Answers:

It seems py.test doesn’t use the test fixtures when evaluating the expression for skipif. In your example, test_ios is actually successful because it compares the function platform found in the module’s namespace to the string "ios", which evaluates to False, hence the test is executed and succeeds. If pytest were inserting the fixture value for the evaluation as you expect, that test would have been skipped.
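A quick way to see this for yourself (a minimal sketch; the assumption here is that the skipif string is evaluated against the test module’s globals) is to check what the name platform refers to at module level:

# sketch_namespace.py - what the skipif string actually "sees".
# Assumption: the expression is evaluated with the test module's globals,
# where "platform" is bound to the fixture object itself, not to the
# "ios" string it returns inside a test.
import pytest

@pytest.fixture
def platform():
    return "ios"

print(platform == "ios")  # False -> skipif("platform == 'ios'") never skips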

A solution to your problem (though not to your question as asked) would be to implement a fixture that inspects the marks on the tests and skips them accordingly:

# conftest.py
import pytest

@pytest.fixture
def platform():
    return "ios"

@pytest.fixture(autouse=True)
def skip_by_platform(request, platform):
    marker = request.node.get_closest_marker('skip_platform')
    if marker and marker.args[0] == platform:
        pytest.skip('skipped on this platform: {}'.format(platform))

A key point is the autouse parameter, which makes that fixture run automatically for every test. Your tests can then mark which platforms to skip like this:

@pytest.mark.skip_platform('ios')
def test_ios(platform, request):
    assert 0, 'should be skipped'
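If the platform for the current test run should come from the invocation rather than being hard-coded, the platform fixture can read a command-line option instead. A sketch, assuming a custom --platform option (the option name is mine, not part of the answer above):

# conftest.py - variant: select the platform per test run,
# e.g. pytest --platform=android
import pytest

def pytest_addoption(parser):
    parser.addoption("--platform", action="store", default="ios",
                     help="platform this test run targets: ios or android")

@pytest.fixture
def platform(request):
    return request.config.getoption("--platform")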
Answered By: Bruno Oliveira

I had a similar problem, and I don’t know if this is still relevant for you, but I might have found a workaround that does what you want.

The idea is to extend the MarkEvaluator class and override its _getglobals method so that the fixture values are added to the globals the evaluator uses. Note that MarkEvaluator is a pytest internal (from _pytest.skipping) that was removed when the skipping module was rewritten in pytest 6.0, so this workaround only applies to older pytest versions:

conftest.py

import pytest
from _pytest.skipping import MarkEvaluator

class ExtendedMarkEvaluator(MarkEvaluator):
    def _getglobals(self):
        # Extend the evaluation globals with the values of the fixtures
        # requested by the current test item.
        d = super()._getglobals()
        d.update(self.item._request._fixture_values)
        return d

Then add a hook that runs the evaluator on each test call:

def pytest_runtest_call(item):
    # Evaluate our custom "skipif_call" marker with fixture values in scope.
    evalskipif = ExtendedMarkEvaluator(item, "skipif_call")
    if evalskipif.istrue():
        pytest.skip('[CANNOT RUN] ' + evalskipif.getexplanation())

Then you can use the skipif_call marker in your test cases:

test_example.py

import pytest

class Machine:
    def __init__(self, state):
        self.state = state

# Session-scoped so the same Machine instance is shared by all tests below;
# the runs shown further down rely on one test's state change being visible
# to the next test.
@pytest.fixture(scope="session")
def myfixture(request):
    return Machine("running")

@pytest.mark.skipif_call('myfixture.state != "running"')
def test_my_fixture_running_success(myfixture):
    print(myfixture.state)
    myfixture.state = "stopped"
    assert True

@pytest.mark.skipif_call('myfixture.state != "running"')
def test_my_fixture_running_fail(myfixture):
    print(myfixture.state)
    assert False

@pytest.mark.skipif_call('myfixture.state != "stopped"')
def test_my_fixture_stopped_success(myfixture):
    print(myfixture.state)
    myfixture.state = "running"

@pytest.mark.skipif_call('myfixture.state != "stopped"')
def test_my_fixture_stopped_fail(myfixture):
    print(myfixture.state)
    assert False

Run

pytest -v --tb=line
============================= test session starts =============================
[...]
collected 4 items

test_example.py::test_my_fixture_running_success PASSED
test_example.py::test_my_fixture_running_fail FAILED
test_example.py::test_my_fixture_stopped_success PASSED
test_example.py::test_my_fixture_stopped_fail FAILED

================================== FAILURES ===================================
C:test_example.py:21: assert False
C:test_example.py:31: assert False
===================== 2 failed, 2 passed in 0.16 seconds ======================

Problem

Unfortunately, this works only once per expression: MarkEvaluator evaluates through cached_eval, which caches results keyed by the expression string, so the next time the same expression is tested the cached value is returned instead of being recomputed.

Solution

The expression is evaluated in the _istrue method, and unfortunately there is no way to configure the evaluator to avoid caching results. The only way to avoid caching is to override _istrue so that it does not use the cached_eval function:

import _pytest._code

class ExtendedMarkEvaluator(MarkEvaluator):
    def _getglobals(self):
        d = super()._getglobals()
        d.update(self.item._request._fixture_values)
        return d

    def _istrue(self):
        if self.holder:
            self.result = False
            for expr in self.holder.args:
                self.expr = expr
                d = self._getglobals()
                # Uncached eval, so the current fixture values are reloaded
                # for every test instead of reusing the first cached result.
                exprcode = _pytest._code.compile(expr, mode="eval")
                if eval(exprcode, d):
                    self.result = True
                    self.reason = expr
                    break
            return self.result
        return False

Run

pytest -v --tb=line
============================= test session starts =============================
[...]
collected 4 items

test_example.py::test_my_fixture_running_success PASSED
test_example.py::test_my_fixture_running_fail SKIPPED
test_example.py::test_my_fixture_stopped_success PASSED
test_example.py::test_my_fixture_stopped_fail SKIPPED

===================== 2 passed, 2 skipped in 0.10 seconds =====================

Now the failing tests are skipped, because the marker expression is re-evaluated against the updated myfixture state on every test.

Hope it helps.

Answered By: Alexper

Taking inspiration from this answer to another SO question, I am using the following approach, which works well:

import pytest

@pytest.fixture(scope='session')
def requires_something(request):
    something = 'a_thing'
    if request.param != something:
        pytest.skip(f"Test requires {request.param} but environment has {something}")


@pytest.mark.parametrize('requires_something',('something_else',), indirect=True)
def test_indirect(requires_something):
    print("Executing test: test_indirect")

Answered By: Gregory Kuhn

Bruno Oliveira's solution works, but with newer pytest (>= 3.5.0) you also need to register the custom marker via pytest_configure (or in pytest.ini), otherwise pytest warns about the unknown mark:

# conftest.py
import pytest

@pytest.fixture
def platform():
    return "ios"

@pytest.fixture(autouse=True)
def skip_by_platform(request, platform):
    marker = request.node.get_closest_marker('skip_platform')
    if marker and marker.args[0] == platform:
        pytest.skip('skipped on this platform: {}'.format(platform))

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "skip_platform(platform): skip test for the given platform",
    )

Use:

@pytest.mark.skip_platform('ios')
def test_ios(platform, request):
    assert 0, 'should be skipped'

Answered By: user1921483