
specifying explicit dependency of a test case on a file

Open · tarpas opened this issue 9 years ago • 4 comments

This would be needed for data files or other files that influence the execution of tests but don't contain normal Python code.

One option for specifying the dependency of a test would be to precede it with a decorator.

@testmon.depends_on('../testmon/plugin.py')
def test_example():
    ...

Another possibility would be a pragma comment.
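
For illustration only, such a pragma might look like the line below; this exact syntax is an assumption, nothing like it exists in testmon today:

def test_example():  # testmon: depends-on ../testmon/plugin.py  (hypothetical pragma)
    ...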

testmon would have to merge this explicit information into the dependency data acquired from coverage.py.

In the first phase, whole-file granularity would need to suffice: whenever a file's modification time changes, the dependent tests would be re-executed.
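
A minimal sketch of that whole-file check, assuming the explicit dependencies and the recorded modification times are available as plain dictionaries; the names below are illustrative, not testmon internals:

import os

def file_changed(path, recorded_mtime):
    """True when the file's current modification time differs from the recorded one."""
    try:
        return os.path.getmtime(path) != recorded_mtime
    except OSError:
        return True  # a missing or unreadable file counts as changed

def tests_to_rerun(explicit_deps, recorded_mtimes):
    """explicit_deps: {test name: [file paths]}, recorded_mtimes: {path: mtime}."""
    return {
        test_name
        for test_name, paths in explicit_deps.items()
        if any(file_changed(p, recorded_mtimes.get(p, -1.0)) for p in paths)
    }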

tarpas · Mar 31 '15 14:03

See one of the use cases: https://github.com/tarpas/pytest-testmon/issues/49

tarpas · Nov 05 '16 11:11

I assumed it's not a duplicate of #49, because in #49 I want to add that dependency dynamically (when a test retrieves some data through a specific ORM class), whereas here it's about setting it statically for each test. But now I'm thinking a hook for "add a dependency on file X to the currently running test" would work for both the test decorator and my use case.
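
One hypothetical shape for such a hook, sketched from a fixture. testmon_add_dependency() does not exist in testmon; it is stubbed here only to show where the real hook would be called, and the data file path is an assumption:

import pytest

def testmon_add_dependency(node, path):
    """Hypothetical hook: the real thing would register `path` as a dependency
    of `node` in testmon's data; here it only records the intent."""
    print(f"{node.nodeid} additionally depends on {path}")

@pytest.fixture
def customer_data(request):
    path = "tests/data/customers.json"          # assumed data file
    testmon_add_dependency(request.node, path)  # attach the file to the running test
    with open(path) as fh:
        return fh.read()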

ktosiek · Nov 05 '16 12:11

I think something like this would be helpful for me too.

Providing a programmatic way to do the following would be great:

  1. Mark a test as "dirty" so that it needs to be rerun
  2. Check whether a test is "dirty" and hence needs to be rerun (not sure if this already exists)

Then I could write logic in my conftest.py to decide when to mark a test to be run. For example:

# conftest.py

def pytest_collection_modifyitems(session, config, items):
    for item in items:
        # uses_a_file_that_is_modified(), mark_item_for_run() and is_dirty() are
        # hypothetical helpers describing the desired API; they don't exist in testmon.
        if uses_a_file_that_is_modified(item):  # Case 1
            mark_item_for_run(item)
        if any(is_dirty(i) for i in item.depends_on):  # Case 2
            mark_item_for_run(item)

I have some use cases where this could be helpful for me:

Case 1: Depending on resource files (https://github.com/tarpas/pytest-testmon/issues/178). I have a fixture for resources, so I can detect whether that fixture is being used and whether my resource folder has been "modified", and run the test if so. I don't mind writing this logic out myself, because I understand testmon does not want to support this.
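
A rough conftest.py sketch of Case 1, assuming the resource files live under tests/resources and that the fixture is called "resources"; mark_item_for_run() is the hypothetical hook from the example above, while item.fixturenames is regular pytest API:

import os

RESOURCE_DIR = "tests/resources"     # assumed location of the resource files
STAMP_FILE = ".resources_mtime"      # cache of the last-seen timestamp

def _latest_mtime(root):
    """Newest modification time of any file under root."""
    return max(
        (os.path.getmtime(os.path.join(dirpath, name))
         for dirpath, _, names in os.walk(root) for name in names),
        default=0.0,
    )

def _resources_changed():
    current = _latest_mtime(RESOURCE_DIR)
    try:
        previous = float(open(STAMP_FILE).read())
    except (OSError, ValueError):
        previous = -1.0
    with open(STAMP_FILE, "w") as fh:
        fh.write(str(current))
    return current > previous

def pytest_collection_modifyitems(session, config, items):
    if not _resources_changed():
        return
    for item in items:
        if "resources" in getattr(item, "fixturenames", ()):  # test requests the resource fixture
            mark_item_for_run(item)                            # hypothetical hook from above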

Case 2: Depending on other tests, with pytest-order. Some of my projects use @pytest.mark.order(before="test_first") to enforce dependencies across tests, so I could mark test_two to be run if test_one is marked as dirty.
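
And a sketch for Case 2, walking the pytest-order markers to propagate dirtiness; is_dirty() and mark_item_for_run() are the same hypothetical hooks as above, while item.iter_markers() is regular pytest API:

def _order_dependencies(item):
    """Yield the test names this item is ordered relative to via @pytest.mark.order."""
    for marker in item.iter_markers(name="order"):
        for key in ("before", "after"):
            value = marker.kwargs.get(key)
            if value:
                yield value

def pytest_collection_modifyitems(session, config, items):
    by_name = {item.name: item for item in items}
    for item in items:
        for dep_name in _order_dependencies(item):
            dep = by_name.get(dep_name)
            if dep is not None and is_dirty(dep):  # hypothetical query
                mark_item_for_run(item)            # hypothetical hook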

AbdealiLoKo · Oct 28 '22 04:10

This would be useful to me as well. My use case is an integration test where there's a one-to-one mapping between module/file and test, but coverage.py can't trace it since the module is executed as a subprocess.
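
For example, with the decorator proposed at the top of this issue (still hypothetical, and the module path and CLI flag here are assumptions), the subprocess case could be declared explicitly:

import subprocess
import sys

@testmon.depends_on("src/mymodule.py")   # hypothetical decorator from this issue
def test_mymodule_cli():
    # coverage.py never sees mymodule.py because it runs in a child process,
    # so the file dependency is declared explicitly instead.
    result = subprocess.run([sys.executable, "src/mymodule.py", "--check"],
                            capture_output=True)
    assert result.returncode == 0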

alexrudd2 · Apr 25 '23 14:04