Skip unavailable tests instead of removing them

dvicini opened this pull request 2 months ago • 4 comments

The latest conftest.py removes tests with unavailable variants, as explained in #1659. This makes sense, as the code previously used a somewhat inefficient way to skip tests with missing variants.

However, this causes a minor issue with recent pytest versions (>= 8.4.1): test files in which every test is removed now produce a non-zero exit code (exit code 5, indicating that no tests were collected). This can happen, for example, when running test_optixdenoiser on a system without a GPU, or test_polarizer when no polarization variant has been built.

This PR modifies conftest.py to mark such tests as skipped instead, reusing the same (fast) logic originally added in #1659. On my system, the test suite (LLVM only, -m "not slow") takes around 50s both with and without this PR, so I do not observe a significant performance cost from having pytest report the skipped tests rather than removing them entirely.
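
For illustration, here is a minimal sketch of the approach (not the actual conftest.py code; the helper `_required_variant` and the parameter name `variant` are assumptions made for the example):

```python
# conftest.py -- minimal sketch, assuming tests are parametrized over a
# 'variant' parameter; the real Mitsuba 3 logic differs in the details.
import pytest
import mitsuba as mi

def _required_variant(item):
    # Hypothetical helper: recover the variant a collected test item was
    # parametrized with (e.g. 'cuda_ad_rgb'), or None if it has none.
    callspec = getattr(item, "callspec", None)
    if callspec is not None:
        return callspec.params.get("variant")
    return None

def pytest_collection_modifyitems(config, items):
    available = set(mi.variants())  # variants compiled into this build
    for item in items:
        variant = _required_variant(item)
        if variant is not None and variant not in available:
            # Mark instead of dropping: pytest reports the test as skipped,
            # and a file whose tests are all unavailable no longer makes
            # the run exit with code 5 ("no tests were collected").
            item.add_marker(pytest.mark.skip(
                reason=f"variant '{variant}' is not available"))
```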

It's a somewhat minor change, but it ensures that test runs without failures consistently return exit code 0 :) It may be worth double-checking performance on other systems in case there are known challenges (e.g., on certain operating systems?).

dvicini · Nov 17 '25, 15:11

I don't know about performance implications, but personally I like having the tests marked as 'skipped' (with a clear reason) instead of having them disappear :+1:

merlinND · Nov 18 '25, 14:11

In those few files, how about just adding a dummy test function? I hesitate to change the test discovery mechanism.
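
(A hypothetical version of that workaround, for concreteness; this is not code from the repository:)

```python
# Hypothetical placeholder test: it is always collected, so a file whose
# variant-dependent tests were all removed still exits with code 0.
def test_placeholder():
    pass
```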

wjakob · Nov 18 '25, 15:11

Just a quick question: we can't really know ahead of time which variants the user has, so there always needs to be a way of generating tests at run time. I have not reviewed the PR in detail, but I don't fully understand how this can be compatible with marking tests as skipped.
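
(For context: runtime generation in pytest is commonly done via the pytest_generate_tests hook. A hedged sketch, assuming a `variant` fixture name that may not match what the actual conftest.py uses:)

```python
import mitsuba as mi

def pytest_generate_tests(metafunc):
    # Parametrize any test requesting a 'variant' fixture over the variants
    # that were actually compiled into this Mitsuba build.
    if "variant" in metafunc.fixturenames:
        metafunc.parametrize("variant", mi.variants())
```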

wjakob · Nov 18 '25, 15:11

I only modified the pytest_collection_modifyitems function, which is called after the tests have been collected. pytest allows marking tests as skipped at that point instead of dropping them from the collected set. The mechanism that decides which tests can and cannot run is exactly the same as before.

dvicini · Nov 19 '25, 07:11