test runner to include the name of the file being run
What is the problem this feature will solve?
Let's consider the following output:
✔ logs errors during startup (121.652917ms)
✔ errors when starting an already started application (70.462041ms)
✔ errors when stopping an already stopped application (0.26525ms)
✔ does not restart while restarting (84.201625ms)
✔ restarts on SIGUSR2 (77.047292ms)
✔ stops on signals other than SIGUSR2 (38.43975ms)
✔ stops on uncaught exceptions (37.9015ms)
▶ supports configuration overrides
✔ throws on non-string config paths (30.867208ms)
✔ ignores invalid config paths (36.336958ms)
✔ sets valid config paths (46.613208ms)
▶ supports configuration overrides (114.128875ms)
✔ /Users/matteo/Repositories/platformatic/packages/runtime/test/cli/helper.mjs (98.878167ms)
✔ autostart (826.68275ms)
✔ start command (855.799541ms)
✖ handles startup errors (10102.117417ms)
Error: Promise resolution is still pending but the event loop has already resolved
at process.emit (node:events:513:28)
Unfortunately it's impossible to know at first glance where "handles startup errors" is defined.
This is critical information for the user.
What is the feature you are proposing to solve the problem?
Whenever the test runner is running more than one file, we should print out the name of the file being run, either as the full path or relative to cwd, something like:
▶ path/to/my/test.js
✔ throws on non-string config paths (30.867208ms)
✔ ignores invalid config paths (36.336958ms)
✔ sets valid config paths (46.613208ms)
What alternatives have you considered?
No response
cc @MoLow @cjihrig
That used to be the case, and it changed to avoid adding an indentation level when running with --test, and to avoid a big difference in the output when running with and without --test.
I assume that when you run the file directly without --test the output also doesn't include the name, so I would suggest adding the file name to the error message instead of adding an indentation level.
(maybe we can even add it only if the stack trace doesn't include it)
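The heuristic described above could be sketched roughly like this (a minimal sketch; formatFailure and its signature are hypothetical helpers for illustration, not an existing node:test API):

```javascript
// Sketch of the suggestion above: append the file path to a failure line only
// when the stack trace doesn't already mention it.
// `formatFailure` is a hypothetical helper, not part of the test runner.
function formatFailure(name, file, stack) {
  const needsFile = !stack || !stack.includes(file);
  const location = needsFile && file ? ` (${file})` : '';
  return `✖ ${name}${location}\n${stack ?? ''}`;
}

// Example: this stack trace already names the file, so it is not repeated.
console.log(formatFailure(
  'handles startup errors',
  'test/cli/start.test.mjs',
  'Error: boom\n    at test/cli/start.test.mjs:10:5'
));
```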
Adding it to the printed error would help a lot. Right now it is not intuitive.
Aside from that, I still find it confusing not to see where the tests are located.
Here is another option that keeps the indentation level:
>>> path/to/my/test.js
✔ throws on non-string config paths (30.867208ms)
✔ ignores invalid config paths (36.336958ms)
✔ sets valid config paths (46.613208ms)
Ultimately when running large test suites I care more about the file being run vs the individual test inside that file. Maybe this can be a different reporter?
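A per-file view like this could be sketched as a custom reporter (node:test accepts a custom reporter via --test-reporter; one form is an async generator consuming the event stream). The event shapes below are simplified assumptions for illustration:

```javascript
// A minimal sketch of a custom reporter that prints each file path once,
// before the tests it contains. As a real reporter you would save this in
// an ES module and run: node --test --test-reporter=./file-reporter.mjs
// The event shapes used here are simplified assumptions.
async function* fileReporter(source) {
  let lastFile;
  for await (const event of source) {
    const file = event.data.file;
    if (file && file !== lastFile) {
      lastFile = file;
      yield `▶ ${file}\n`; // print each file name once
    }
    if (event.type === 'test:pass') {
      yield `  ✔ ${event.data.name} (${event.data.details.duration_ms}ms)\n`;
    } else if (event.type === 'test:fail') {
      yield `  ✖ ${event.data.name} (${event.data.details.duration_ms}ms)\n`;
    }
  }
}
// In a real reporter module: export default fileReporter;
```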
+1 to printing the filename. I also think it would be useful (although a different feature request) to print the full path of the test when printing the failures (for example, top level test > subtest > another subtest > the actual test that failed).
If no one is currently working on it, I'll be happy to try... if it makes sense to implement this
I think we'll need to create a new event (e.g. test:file) to be emitted by runTestFile() in runner.js?
Technically, we could reuse the test:start event with a special nesting value (e.g. -1), but that seems like a hack IMO...
I think we'll need to create a new event
Could we add the filename to existing events instead of introducing a new one?
All existing events have a file property.
This issue is about the spec reporter and how it uses that emitted property.
I am OK with adding it, but I would really prefer something that will behave the same when not using --test.
All existing events have a file property.
Yup...
Could we add the filename to existing events instead of introducing a new one?
Initially I was thinking of printing the filename when test:start is emitted for the first time.
But it seems that there's no clear separation indicator between two files.
I received something like this (with enqueue and dequeue events omitted):
<File 1>
test:start
test:pass
test:start
test:fail
<File 2>
test:start
test:pass
test:plan
test:diagnostic
...
Edit: Wait... I've been assuming that we print the filename before each file run (not before each test run) :) Is that a bad assumption?
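The workaround implied by the event sequence above could be sketched as follows: with no dedicated "file started" event, a reporter has to detect the boundary by comparing the file property of consecutive events. The event shapes here are illustrative assumptions, and createFileTracker is a hypothetical helper:

```javascript
// Sketch of detecting a file boundary from the event stream: remember the
// last file seen and report true only when it changes.
// `createFileTracker` is a hypothetical helper, not a node:test API.
function createFileTracker() {
  let lastFile;
  return (event) => {
    const file = event.data && event.data.file;
    if (!file || file === lastFile) return false;
    lastFile = file;
    return true; // first event seen for this file
  };
}
```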
One more datapoint: it's almost impossible to know where a test failed with the current output on a large test suite:
✖ failing tests:
✖ handles startup errors (1296.021709ms)
'Promise resolution is still pending but the event loop has already resolved'
✖ exits on error
'Promise resolution is still pending but the event loop has already resolved'
✖ does not start if node inspector flags are provided
'Promise resolution is still pending but the event loop has already resolved'
✖ starts the inspector
'Promise resolution is still pending but the event loop has already resolved'
ELIFECYCLE  Test failed. See above for more details.