trycmd
Say "no tests" instead of "ignored" when file is run with no tests
Currently, if a file has no tests, it says `Testing foo ... ignored`, but it should instead say `Testing README.md ... no tests`. Or maybe it should fail? This is similar to #105 in that there's a general question of what should happen to "invalid" tests.
Allowing files without tests is very intentional. For example, when running trycmd across clap examples, the README doesn't have any code blocks, which is fine. Unlike #105, where there were workarounds, the only workaround here is to explicitly skip these files, which then has to be kept up to date without any apparent benefit.
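For context, this is roughly what that workflow looks like: trycmd runs as an ordinary test function inside the default harness, handed a mix of files where some may contain no runnable cases. This is a minimal sketch; the glob pattern is illustrative and may not match a real project layout.

```rust
// Integration test (e.g. tests/cli.rs) using the trycmd crate
// (requires trycmd as a dev-dependency).
#[test]
fn cli_tests() {
    trycmd::TestCases::new()
        // A README with no code blocks currently reports "ignored";
        // this issue asks whether it should say "no tests" instead.
        .case("README.md")
        // Illustrative glob for per-command test files.
        .case("tests/cmd/*.trycmd");
}
```

Because this is a test *function* rather than a test *harness*, its per-file status lines are interleaved with libtest's own output, which is the constraint raised later in the thread.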
At the moment, I lean towards the more consistent output of "ignored" over changing it to "no tests". Besides my preference, we'd have to consider whether the change is worth the effort of ensuring all the right information ends up in the right places.
> For example, when running trycmd across clap examples, the README doesn't have any code blocks which is fine.
That makes sense to me.
> I lean towards the more consistent output of "ignored" over changing it to "no tests".
This is fine if you're expecting the file to be ignored, but if you don't know why it's being ignored, that's not super helpful. Maybe the output should actually look more like cargo's? Here's what they do:
```text
     Running tests/api.rs (target/debug/deps/api-208fdccdd2171b92)

running 0 tests

test result: ok. 0 passed; 0 failed; 0 ignored; 0 measured; 3 filtered out; finished in 0.00s

     Running tests/generator.rs (target/debug/deps/generator-a9e45428f33384fb)

running 1 test
test foo ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 284 filtered out; finished in 0.18s
```
Specifically, they say how many tests are running. So instead of "ignored", you could say "0 tests run". And when the command doesn't exist, that can go into the ignored-tests bucket.
If we were providing a full test harness, I'd be more open to that direction. Since we are a test function within an existing harness, mixing in output like that would really muddle the results.
If/when we switch to being a test harness*, I also would prefer not to summarize results per file, as that would get pretty noisy.
\* What we really need is not for every lib to make its own test harness but a proper implementation of pytest for Rust, which would allow us to plug everything into the test harness.
Hmmm, yeah those are good points.
> At the moment, I lean towards the more consistent output of "ignored" over changing it to "no tests".
Could you then share an example of why consistent output is important? If that's really the preference, then this issue is kind of moot and may as well be closed. My main argument against keeping things consistent is that it lets you say why the test was ignored.
PS: a pluggable test harness would be amazing! That's a very cool future direction I wasn't aware of.
I have not fully articulated my thoughts on it, and I don't have the time to do so right now. Of all the potential usability issues, this is one of the lowest on my list across my projects.
:shrug: Sounds good. If this issue sits around long enough, maybe the test harness stuff will become a thing and this can be closed through that.
I'm closing this within the scope of trycmd as it is today; instead, this should be handled as part of my custom test harness project (TBD).