Running in parallel suppresses output from failed examples
I have a module whose tests include some testable examples. The examples run, and the failure is detected, but I've found that if I use ginkgo run -p to run tests in parallel, then the output of failed examples never gets printed.
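The full reproduction isn't reproduced here, but it boils down to something like the single test file below: a deliberately failing testable example alongside a standard Ginkgo bootstrap. Only ExampleFoo, the "Examples" suite name, and the RunSpecs call come from the report; the package name and the single passing spec are assumptions filled in to match the "1 of 1 specs" output.

```go
package example_test

import (
	"fmt"
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// ExampleFoo is a standard go test testable example. It fails on purpose:
// it prints "foo" but declares "bar" as the expected output.
func ExampleFoo() {
	fmt.Println("foo")
	// Output: bar
}

// TestExamples is the usual Ginkgo bootstrap that hands the suite to go test.
func TestExamples(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Examples")
}

// A single passing spec, so the suite reports "1 of 1 specs" as above.
var _ = Describe("a spec", func() {
	It("passes", func() {
		Expect(true).To(BeTrue())
	})
})
```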
Running sequentially:
$ ginkgo run .
Running Suite: Examples - /Users/Rob.Kennedy/src/ginkgo-example-bug
===================================================================
Random Seed: 1761932072
Will run 1 of 1 specs
•
Ran 1 of 1 Specs in 0.000 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
--- FAIL: ExampleFoo (0.00s)
got:
foo
want:
bar
FAIL
Ginkgo ran 1 suite in 706.842041ms
Test Suite Failed
exit status 1
Running parallel:
$ ginkgo run -p .
Running Suite: Examples - /Users/Rob.Kennedy/src/ginkgo-example-bug
===================================================================
Random Seed: 1761932054
Will run 1 of 1 specs
Running in parallel across 7 processes
•
Ran 1 of 1 Specs in 0.004 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
Ginkgo ran 1 suite in 2.404204125s
Test Suite Failed
exit status 1
We can see above that we still get "Test Suite Failed," but we're not shown what failed.
hey there - sorry to be the bearer of bad news, but mixing standard go test tests, examples, and benchmarks into a ginkgo suite doesn't work very well when running in parallel.
To guard against test pollution, ginkgo runs specs in parallel by spinning up separate test processes, each of which invokes your test suite. The ginkgo specs within your suite are scheduled so that they are distributed across those processes (i.e. a given spec runs exactly once, on some parallel process). But the go test tests will run on every process, and their output won't be captured and recapitulated by ginkgo, since they run outside of its control.
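To make the scheduling concrete, here is a small illustrative spec (not from your suite) that logs which worker it landed on. GinkgoParallelProcess() is part of the ginkgo/v2 API and returns the 1-indexed worker number; running with -p and -v, the message should appear exactly once, whereas a plain Example or Test function has no equivalent hook and simply executes in every worker process.

```go
package example_test

import (
	. "github.com/onsi/ginkgo/v2"
)

// A spec that reports which parallel worker it ran on. Because ginkgo
// distributes specs across its worker processes, this runs exactly once.
var _ = It("reports which parallel process it ran on", func() {
	GinkgoWriter.Printf("spec ran on parallel process %d\n", GinkgoParallelProcess())
})
```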
Yeah, I noticed the "examples run in every process" effect in an earlier iteration of reproducing the issue, where I'd neglected to call RunSpecs at all. Then I just got five copies of the failing example output! (Oops.)
Even if it's not feasible to fix this, can the experience be improved any? For example, if Ginkgo were able to detect that there are non-Ginkgo tests, benchmarks, or examples in play, then maybe after a failure it could offer a reminder that -p might be interfering with them?