nunit-console

Suppress "skipped tests" output in console-runner for Explicit or Ignore tests

AGBrown opened this issue • 14 comments

NUnit console: "nunit3-console.exe" v3.8.0

We use the console runner in a test script as part of git processes like git rebase --exec and git bisect. Often we'll have tests marked Explicit (they are slow tests; we don't need them during the git process, but we'll run them once we've finished) or Ignore (we try to avoid this, but it happens when there is an open issue for something we can't get to right now).

The issue is that the Tests Not Run output swamps the console (or the output log which is just the console output written to a file) when the script is run. All we really want to see on these test outputs is Green/Red (and if red then what went wrong). In this case the tests that were marked Explicit or Ignore are not important, and actually make it harder to go through the output log. The test summary also gives the total Skipped, Explicit and Ignore counts which is enough to note that there are tests that are not being run.

Things I tried:

  • Use --where=EXPRESSION to exclude tests marked Explicit or Ignore (doesn't work)
  • Use --noresult (doesn't work)
  • Use --trace=Error (doesn't work)

The only other workaround I could think of is to also mark every Explicit or Ignore test with a Category that can be excluded using --where, but we never remember to do this when it matters, and the ongoing argument has been that having already added one attribute should be enough to remember.

TL;DR: for the console runner, is there a way to suppress the "Tests Not Run" output, or could a parameter be added that would?
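In the meantime, one possible workaround is post-processing: pipe the runner's console output through a filter that drops the section. A minimal sketch, assuming the section begins with the literal header "Tests Not Run" and the next section header is "Test Run Summary" (check these against your runner version):

```shell
# Workaround sketch (not a built-in nunit3-console feature): strip the
# "Tests Not Run" section from the captured console output. The two
# header patterns below are assumptions; adjust them for your version.
strip_not_run() {
    awk '
        /^Tests Not Run$/    { skipping = 1; next }  # start skipping at the section header
        /^Test Run Summary$/ { skipping = 0 }        # resume at the summary section
        skipping == 0        { print }
    '
}

# Example usage (untested placeholder paths):
#   nunit3-console.exe ./src/A.Test.Project/A.Test.Project.csproj 2>&1 | strip_not_run
```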


Sample test script:


#!/bin/sh
# exit on error
set -e

# ... do git stuff here to skip deliberately red commits in a red-green two-step ...

# ... do build scripts here ...

# Run unit tests
"C:\Program Files\NUnit\nunit3-console.exe" \
    //noresult \
    //stoponerror \
    ./src/A.Test.Project/A.Test.Project.csproj || status=$?
# "|| status=$?" captures the exit code without tripping "set -e";
# with a bare "status=$?" on the next line, "set -e" would exit first.

if [ "${status:-0}" -ne 0 ]; then
    echo "Test failure, status $status."
    exit $status
fi

echo "Tests passed."

Repro steps:

  1. Have a test dll, mark a test as Explicit and another as Ignore
  2. Run the console runner using the shell script
  3. The output includes a "Tests Not Run" list with the Explicit and Ignore tests listed.

AGBrown, Jun 20 '18

I would also find this useful at my day job.

jnm2, Jun 20 '18

Suppressing Explicit makes some sense, because it amounts to not selecting the test. OTOH, Ignore produces a warning result, so not showing what caused the warning seems strange.

In V2 Explicit tests were neither listed nor counted.

Would the output seem less cluttered if we went back to the old order of reports?

CharliePoole, Jun 20 '18

I would also prefer it if explicit and ignored tests did not clutter the output. Perhaps the behavior change can be hidden behind a console-runner parameter?

siprbaum, Jun 21 '18

For my part, it's only the explicit test listing that is burdensome. I like the idea of treating ignored items as a TODO list. It seems okay to keep considering them to be anomalous (in line with being warnings) until you fix them, if that's not too opinionated of us.

If we do decide to just hide explicit tests, it seems like we could just start doing that without adding a setting. Or at least make it the default to not show them. What do you think?

@CharliePoole

Would the output seem less cluttered if we went back to the old order of reports?

I'm not sure what this is.

jnm2, Jun 21 '18

@jnm2 As I mentioned on the other issue, the old order was Summary first.

What if we parameterized all the reports in some way using a flags enum? You could specify just summary and errors, for example, and have a predefined value All.

CharliePoole, Jun 21 '18

Flags sound pretty cool. Would there be a CLI switch per flag value?

jnm2, Jun 21 '18

We'd have to design something. Rather than a switch per sub-report, I could see having sub-switches like --report=Summary+Error.
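To make the idea concrete, here is a hypothetical sketch (the --report switch does not exist in nunit3-console today; the name and value format are assumptions) of how a runner script could split such a value into individual sub-reports:

```shell
# Hypothetical only: split a proposed "--report=Summary+Error" style
# argument into its component sub-report names, one per line.
parse_report() {
    # Strip the assumed "--report=" prefix, then split on '+'
    printf '%s\n' "${1#--report=}" | tr '+' '\n'
}

# parse_report --report=Summary+Error  ->  "Summary" and "Error", one per line
```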

CharliePoole, Jun 22 '18

I like the idea of treating ignored items as a TODO list. It seems okay to keep considering them to be anomalous (in line with being warnings) until you fix them, if that's not too opinionated of us.

The number of Ignore tests we have is vastly smaller than the number of Explicit tests. While I would therefore prefer to be able to suppress both, suppressing Explicit only would still be a big win.

AGBrown, Jun 28 '18

@AGBrown If you have ignored tests that aren't inherently a TODO, have you considered using Assert.Inconclusive (more of a N/A) instead?

jnm2, Jun 28 '18

In my coaching hat, I always treat a lot of ignored tests as a team issue. Best way I know to fix it is to hang up a list of all the ignored tests from the nightly build on the wall. Most teams know without being told who is responsible for each one and they magically reduce over a few weeks.

CharliePoole, Jun 29 '18

If you have ignored tests that aren't inherently a TODO, have you considered using Assert.Inconclusive (more of a N/A) instead?

and

I always treat a lot of ignored tests as a team issue. Best way I know to fix it is to hang up a list of all the ignored tests from the nightly build on the wall.

I agree with you both. Our Ignore tests are treated as todo items and there are very few of them, often none. The use case for suppressing all "Tests Not Run" output (Ignored and Explicit) is a developer in the middle of "I'm diagnosing an issue" (git bisect) or "I'm completing a feature" (git rebase --exec); at that point the Explicit and Ignored lists are not relevant to the task at hand and only distract.

  1. In the middle of this work (inspecting the results of the exec step for instance) the developer has enough to consider and wants to focus on the immediate concern - do my fast tests pass - and not have to constantly context shift between what is relevant to this feature rebase, and what is the bigger picture after I finish this feature.
  2. At the end of the rebase work they then want to know what was Explicit so they can extend coverage to validate the rebased feature (for instance) by running the slow tests
  3. and after the feature/bugfix are completed and merged then they want to know what was Ignored so they can pick the next issue to work on.

TL;DR the "Tests Not Run" list, in certain use cases, slows the developer down by making them do more context shifts (and scanning/reading/scrolling) before they can complete the current task and then go back to the point where they want/need to pay attention to "Tests Not Run".

AGBrown, Jul 04 '18

I've run into this myself in working on NUnit. Usually I avoid the problem by running tests in a particular fixture or namespace.
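Scoping a run to a fixture or namespace can be scripted with the runner's --where option; a sketch (the fixture and namespace names are placeholders, and values containing unusual characters may need quoting in the expression):

```shell
# Sketch: build a --where argument that restricts the run to one fixture
# (class) or one namespace. Names below are placeholders.
where_scope() {
    kind=$1   # "fixture" or "namespace"
    name=$2
    case "$kind" in
        fixture)   echo "--where=class==$name" ;;
        namespace) echo "--where=namespace==$name" ;;
        *)         return 1 ;;
    esac
}

# e.g. nunit3-console.exe $(where_scope namespace A.Fast.Tests) A.Test.Project.dll
```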

@AGBrown Let's pretend there is an option that specifies which output you want to see for each of these use cases. What choices would it need to include in your view?

CharliePoole, Jul 04 '18

@CharliePoole

Usually I avoid the problem by running tests in a particular fixture or namespace.

This is a way around it, I think I was even doing that recently, but I still had 60 Explicit tests spamming me on every exec in that one case.

I think this would be my logic (I am open to persuasion, though):

  1. I'm doing rebase --exec or git bisect.
    • fast results with "good enough" coverage are important.
    • I might filter to some "scope" (fixture/namespace/category)
    • I might be rebasing 20 commits - I just want to see 20 red/green lines on the console (no console scrolling, no extra scanning/reading at this point)
    • I can always include particular Explicit tests if they are relevant to this particular feature/diagnosis
    • Therefore: I don't want to see any Explicit or Ignore tests listed in "Tests Not Run"
  2. I've done the rebase --exec or git bisect and want to complete coverage testing to double check everything.
    • This is a one off, so I now include my Explicit tests and swallow the slower test run.
    • I might still be filtered to some "scope"
    • I'm not yet interested in ToDo items represented by Ignore tests
    • Therefore: I want to see any skipped Explicit tests listed in "Tests Not Run", but not Ignore tests
  3. I've merged. What's next on my list?
    • Run the entire test suite
    • Look for Explicits and Ignores that were missed
    • Therefore: I want to be told about all "Tests Not Run": Explicit, Ignore (or other?)
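Purely as a hypothetical sketch, the three modes above could map onto values for the proposed --report switch (neither the switch nor these values exist in nunit3-console today; all names are assumptions):

```shell
# Hypothetical only: map the three use cases above onto values for the
# proposed --report switch discussed earlier in this thread.
report_flag() {
    case "$1" in
        rebase)   echo "--report=Summary+Error" ;;           # use case 1: fast red/green only
        coverage) echo "--report=Summary+Error+Explicit" ;;   # use case 2: also show skipped Explicit
        triage)   echo "--report=All" ;;                      # use case 3: everything not run
        *)        echo "--report=All" ;;                      # safe default
    esac
}
```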

AGBrown, Jul 06 '18

I'm really looking forward to this. It'll make NUnit's own CI logs so much easier to read!

jnm2, Jul 07 '18