
Runtime ignoring

Open · epage opened this issue 3 years ago · 13 comments

In 0.5, Outcome was made private and callers lost the ability to mark a test as ignored/skipped at runtime. This is very useful for test runners built on top of libtest-mimic that support runtime skipping, so they can communicate to the user what happened.
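
For context, here is a rough sketch of how a runner could report an ignore at runtime under the pre-0.5 API. This is reconstructed from memory rather than checked against the 0.4 docs, so the exact `Test` fields and `run_tests` signature are assumptions, and the `NIGHTLY` variable is a made-up stand-in for a real check:

use libtest_mimic::{run_tests, Arguments, Outcome, Test};

fn main() {
    let args = Arguments::from_args();

    // Field names are recalled from the 0.4-era `Test` struct and may
    // not be exact.
    let tests = vec![Test {
        name: "needs_nightly".into(),
        kind: String::new(),
        is_ignored: false,
        is_bench: false,
        data: (),
    }];

    run_tests(&args, tests, |_test| {
        // The runner decides at runtime: report `Ignored` instead of
        // `Passed`/`Failed` when the environment lacks what it needs.
        if std::env::var_os("NIGHTLY").is_none() {
            return Outcome::Ignored;
        }
        Outcome::Passed
    })
    .exit();
}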

epage · Sep 01 '22

Could you provide an example of using that in 0.4? If you have an actual project doing that, you can just link it here. A minimal example is fine, too.

LukasKalbertodt · Sep 01 '22

Having thought it through, my more immediate needs can be worked around (a sketch follows the list):

  • snapbox's wrapper, which takes an action environment variable, can instead set ignore upfront
  • If I were to switch trycmd to using libtest-mimic, I could pre-parse everything and track which cases are ignored.
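
A minimal sketch of the first workaround against the 0.5 API, using the current `Trial::test`, `with_ignored_flag`, and `run` functions; `gpu_available` is a hypothetical probe standing in for whatever a real suite would check:

use libtest_mimic::{Arguments, Trial};

// Hypothetical capability probe; a real suite would check an
// environment variable, hardware, toolchain, etc.
fn gpu_available() -> bool {
    false
}

fn main() {
    let args = Arguments::from_args();

    // Decide while *building* the list whether each case should run,
    // and mark it ignored before handing it to the runner.
    let trials = vec![
        Trial::test("needs_gpu", || Ok(())).with_ignored_flag(!gpu_available()),
    ];

    libtest_mimic::run(&args, trials).exit();
}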

I do know of other places where runtime ignoring is needed by a test case. For example, some of cargo's tests depend on a nightly toolchain or on git and skip themselves otherwise. That logic all lives in the test function, so libtest just sees "pass". If a case like this were moved to libtest-mimic, it would have the same deficiency.

epage · Sep 01 '22

I'm still slightly confused by the "runtime" part of it, because the list of Trials is constructed at runtime too. So you can perform whatever check you want while constructing the list of tests, right?

LukasKalbertodt · Sep 04 '22

Sometimes you can bake conditionals into the framework around a test. Sometimes you can't, and you want to give the test itself the flexibility to decide when it should be skipped.

For example, in my Python days I used pytest to test hardware. To know which hardware to test, I injected test fixtures configured via command-line options. The tests could then decide whether the hardware had the needed capabilities and skip themselves if not.

epage · Sep 15 '22

I would also like to be able to have tests decide at runtime whether to be ignored. In the environment I'm working in, tests are built (along with the rest of the system) on a build server and then run on various different devices. Some tests require particular hardware support and so should be skipped on devices which don't have that hardware. I'd like to perform this check as part of the test rather than in the runner, so the test can decide partway through that it should be ignored rather than pass or fail. This seems to be a fairly common feature in test frameworks for other languages, such as Java and C++.

qwandor · Nov 29 '22

Just chiming in to say that I’d also love to be able to return Ignored dynamically from a test function. Maybe tests could return Result<Success, Failure>, where Success is defined as:

enum Success {
    Passed,
    Ignored,
}
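
To make that concrete, here is a hypothetical sketch of what a test function might look like under the proposal. Today's runners return `Result<(), Failed>`, so the change would be swapping `()` for `Success`; nothing below exists in libtest-mimic besides `Failed`, and `hardware_present` is a made-up probe:

use libtest_mimic::Failed;

// The proposed enum from above; not part of libtest-mimic today.
enum Success {
    Passed,
    Ignored,
}

// Hypothetical probe for whatever capability the test needs.
fn hardware_present() -> bool {
    true
}

// Hypothetical runner signature under the proposal.
fn my_hardware_test() -> Result<Success, Failed> {
    if !hardware_present() {
        // Decide mid-test that the case should be reported as ignored.
        return Ok(Success::Ignored);
    }
    assert_eq!(2 + 2, 4);
    Ok(Success::Passed)
}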

brendanzab · Jan 11 '23

We've switched to this library too, and this is very much a feature we'd love to have as well. Calculating upfront whether a test will be ignored takes more time and slows down creation of the entire runner, compared to creating every test and letting each one figure out whether it can run.

Dinnerbone · Jan 27 '23

I think I have a better understanding of this feature request now. I will look into it! (No promises as to when, though, sorry!)

LukasKalbertodt · Jan 27 '23

Piling on to this request. If there is a branch somewhere, maybe I can look at finishing it?

bwidawsk · Mar 12 '24