feat: introduce the new reporter API
Description
THIS IS A DRAFT.
```ts
interface Reporter {
  /**
   * This method is called when an error happens outside of a test.
   *
   * Possible errors:
   * - unhandled error thrown during the test run
   * - error thrown during the collection phase when importing a module (global throw)
   * - error thrown during the collection phase when calling a `describe` callback
   * - error thrown inside `beforeAll`/`afterAll` hooks, since it can't be attributed to a specific test
   */
  onError(error: SerialisedError, suite?: TestSuite): void
  /**
   * This method is called when all tests inside a single file have been collected.
   * The `file` object has a `children()` method with all suites and tests.
   * Tasks won't have a `result` property yet.
   *
   * **Note:** This method can be called in parallel if multiple files are collected at the same time (unless `--single-worker` is used).
   */
  onFileCollected(file: TestModule): void
  /**
   * This method is called when the runner is ready to start running a test. The `result` will always be `undefined`.
   * **Note:** If the test is marked as `todo` or `skip`, this method is still called.
   */
  onTestPrepare(test: TestCase): void
  /**
   * This method is called when the test has finished running. It will always have a `result` property.
   */
  onTestFinished(test: TestCase): void
  /**
   * This method is called only after the test has finished running and the `result.type` is `failed`.
   */
  onTestFailed(test: TestCase): void
  /**
   * This method is called when the user logs something to the console.
   * The order of logs is not guaranteed.
   */
  onConsoleLog(type: 'stderr' | 'stdout', log: UserConsoleLog): void
  /**
   * This method is called when all tests have finished running.
   */
  onTestRunFinished(files: TestModule[], errors: SerialisedError[], reason: 'passed' | 'failed' | 'timedout' | 'interrupted'): void
  /**
   * Coverage is collected for all files that were executed.
   * This method is called only if the `--coverage` flag is used.
   */
  onCoverage(summary: any): void
}
```
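For illustration, here is a minimal sketch of how a custom reporter might implement this draft interface. The import path and the exact shapes of `TestModule`, `TestCase`, and `SerialisedError` are assumptions based on the draft above and may change before the proposal lands.

```ts
// Hypothetical custom reporter built against the draft interface above.
// The import path is an assumption, not a settled API surface.
import type { Reporter, TestModule, TestCase, SerialisedError, UserConsoleLog } from 'vitest/reporters'

class SummaryReporter implements Reporter {
  private failed = 0

  onError(error: SerialisedError) {
    console.error('Unhandled error outside of a test:', error)
  }

  onFileCollected(file: TestModule) {
    // `children()` is available here, but tasks have no `result` yet.
  }

  onTestPrepare(test: TestCase) {
    // `result` is always undefined at this point, even for todo/skip tests.
  }

  onTestFinished(test: TestCase) {
    // `result` is always defined at this point.
  }

  onTestFailed(test: TestCase) {
    this.failed++
  }

  onConsoleLog(type: 'stderr' | 'stdout', log: UserConsoleLog) {}

  onTestRunFinished(files: TestModule[], errors: SerialisedError[], reason: 'passed' | 'failed' | 'timedout' | 'interrupted') {
    console.log(`run ${reason}: ${this.failed} failed test(s) across ${files.length} file(s)`)
  }

  onCoverage(summary: any) {}
}
```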
The current lifecycle ("run tests" has its own lifecycle):
Please don't delete this checklist! Before submitting the PR, please make sure you do the following:
- [ ] It's really useful if your PR references an issue where it is discussed ahead of time. If the feature is substantial or introduces breaking changes without a discussion, the PR might be closed.
- [ ] Ideally, include a test that fails without this PR but passes with it.
- [ ] Please don't make changes to `pnpm-lock.yaml` unless you introduce a new test example.

Tests
- [ ] Run the tests with `pnpm test:ci`.

Documentation
- [ ] If you introduce new functionality, document it. You can run the documentation with the `pnpm run docs` command.

Changesets
- [ ] Changes in the changelog are generated from the PR name. Please make sure that it explains your changes in an understandable manner. Please prefix changeset messages with `feat:`, `fix:`, `perf:`, `docs:`, or `chore:`.
Proposition for the runTests phase here:

- `onTestFileQueued(file: TestModule)` - collection of the module is about to start
- `onTestFileStart(file: TestModule)` - the module is about to run
  - `onHookStart(...)` - a `before*` hook is called
  - `onTestStart(test: TestCase)` - a single `test`/`it` is about to start
  - `onTestFinish`/`onTestFail`
  - `onHookEnd(...)` - an `after*` hook is called
- `onTestFileFinish(file: TestModule)` - all tests in the module finished running
The important part here is that this order must be guaranteed. Currently, `onTaskUpdate` can skip certain phases if tests run fast enough. We need to add some state in the module that converts `onTaskUpdate` calls into these hooks.
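As a rough illustration of that idea (all names and shapes here are hypothetical, not the actual task-parser implementation), the module-level state could remember which phase was last emitted per test and back-fill anything `onTaskUpdate` skipped:

```ts
// Hypothetical sketch: remember which phase was already reported per test so
// that skipped phases can be back-filled and the hooks above always fire in order.
type Phase = 'queued' | 'started' | 'finished'
const order: Phase[] = ['queued', 'started', 'finished']

class ModuleRunState {
  private reported = new Map<string, Phase>()

  // Called from onTaskUpdate; returns every phase that still needs to be
  // emitted (in order) to reach the state the runtime just reported.
  advance(testId: string, phase: Phase): Phase[] {
    const current = this.reported.get(testId)
    const from = current ? order.indexOf(current) + 1 : 0
    const to = order.indexOf(phase)
    const missing = order.slice(from, to + 1)
    if (missing.length > 0)
      this.reported.set(testId, phase)
    return missing
  }
}
```

A test that jumps straight from "queued" to "finished" in a single update would then still produce the "started" hook before the "finished" one.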
> `onHookStart(...)` - a `before*` hook is called

If we report suite hooks (`beforeAll`/`beforeEach`), should we report `onSuiteStart`?
> `onTestFinish`/`onTestFail`

I think now it's better to have a single hook (`onTestFinish`) and check the result there - I updated `TestCase` to always return a `result()` (it now returns `pending` if no result is set).
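Under that model, a reporter would branch on the result inside a single finish hook. The sketch below is hypothetical: it assumes `result()` returns an object with a `state` field that can be `'pending'` when the test never ran, and an import path that may differ in the final API.

```ts
import type { TestCase } from 'vitest/node' // import path is an assumption

// Hypothetical: one finish handler that branches on the result state
// instead of relying on a separate onTestFailed hook.
function reportFinishedTest(test: TestCase) {
  const result = test.result() // assumed to always return a value now
  if (result.state === 'failed')
    console.error(`FAIL ${test.name}`, result.errors)
  else if (result.state === 'pending')
    console.log(`SKIP ${test.name} (never ran)`)
  else
    console.log(`PASS ${test.name}`)
}
```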
> `onTestFile*`

Should be `onTestModule*`. The idea with the name is that I expect us to implement support for virtual modules.
@AriPerkkio should we update the `onTaskUpdate` payload to include the type of action to make it easier to parse? (`test-started`, `file-finished`, etc.)
> if we report suite hooks (`beforeAll`/`beforeEach`), should we report `onSuiteStart`?

Do we need that anywhere? I don't think `onTaskUpdate` reports this at the moment. Currently slow-running `beforeAll` etc. hooks are shown by reporters, so we need to support those.
> Should be `onTestModule*`. The idea with the name is that I expect us to implement support for virtual modules.

Yup, let's use the `onTestModule*` and `onTestCase*` naming convention in the hooks. It matches the TS interfaces too.
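To make the convention concrete, this is a hypothetical sketch only; the actual hook names were still being decided at this point in the discussion, so none of these identifiers should be read as the final API:

```ts
// Hypothetical renaming of the draft hooks to the onTestModule*/onTestCase* convention.
interface Reporter {
  onTestModuleCollected(module: TestModule): void
  onTestModuleStart(module: TestModule): void
  onTestModuleEnd(module: TestModule): void
  onTestCasePrepare(test: TestCase): void
  onTestCaseFinished(test: TestCase): void
}
```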
> should we update the `onTaskUpdate` payload to include the type of action to make it easier to parse? (`test-started`, `file-finished`, etc.)

I'm not yet sure. The `entity.type` and `entity.state()` should be enough. 🤔
> I'm not yet sure. The `entity.type` and `entity.state()` should be enough. 🤔

`entity` is a user-facing API. I am asking if we should send the type to ourselves when we trigger `onTaskUpdate` in the test runtime, so it's easier to parse on the server. Instead of sending `updateTask(task)`, we do something like `updateTask('test-start', task)` - it's an internal improvement.
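In other words, the runtime would tag each task update with the event that produced it. The following is a rough sketch with hypothetical names for an internal runtime-to-server message, not part of the public reporter API:

```ts
// Hypothetical internal shape: tag each task update with the event that
// produced it so the server-side parser doesn't have to infer the phase.
type TaskUpdateEvent =
  | 'test-start'
  | 'test-finished'
  | 'suite-start'
  | 'suite-finished'
  | 'file-finished'

interface TaskUpdate {
  event: TaskUpdateEvent
  task: unknown // the serialized task; exact type omitted here
}

// Instead of sending updateTask(task), the runtime would send:
function updateTask(event: TaskUpdateEvent, task: unknown): TaskUpdate {
  return { event, task }
}
```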
> Do we need that anywhere? I don't think `onTaskUpdate` reports this at the moment. Currently slow-running `beforeAll` etc. hooks are shown by reporters, so we need to support those.

If it doesn't report it, we can start reporting it. I just don't understand why some hooks are reported and some are not - I see it from the practical standpoint, but philosophically - why? What makes them more special? `beforeAll`/`afterAll` can also take a long time to execute.

`beforeAll`/`afterAll` are shown by reporters, let's keep supporting them like before. I feel like I'm missing something here. 🤔
- https://github.com/vitest-dev/vitest/pull/6893#discussion_r1839800573.
https://github.com/vitest-dev/vitest/blob/8764f5c15760c596f408c30ae1849b3284555c81/packages/vitest/src/node/reporters/task-parser.ts#L66-L75
> `beforeAll`/`afterAll` are shown by reporters, let's keep supporting them like before. I feel like I'm missing something here. 🤔
> https://github.com/vitest-dev/vitest/blob/8764f5c15760c596f408c30ae1849b3284555c81/packages/vitest/src/node/reporters/task-parser.ts#L66-L75
The `beforeEach` hook is tied to a test case, so it's obvious what reporter hooks will be called and in what order. I feel like with `beforeAll` it's not so obvious, because there is no corresponding `onTestSuite*` hook.
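For the suite-level hooks, a reporter could still surface slow `beforeAll` runs through the proposed `onHookStart`/`onHookEnd` pair. This sketch assumes a hypothetical payload with the hook name and its owner; the draft above only specifies `onHookStart(...)`/`onHookEnd(...)`, so the shape is an assumption:

```ts
// Hypothetical payload; the draft leaves the hook arguments unspecified.
interface HookEvent {
  name: 'beforeAll' | 'afterAll' | 'beforeEach' | 'afterEach'
  ownerId: string // id of the module, suite, or test case the hook belongs to
}

const startTimes = new Map<string, number>()

function onHookStart(hook: HookEvent) {
  startTimes.set(`${hook.ownerId}:${hook.name}`, performance.now())
}

function onHookEnd(hook: HookEvent) {
  const key = `${hook.ownerId}:${hook.name}`
  const started = startTimes.get(key)
  // 1000 ms is an arbitrary threshold for this example.
  if (started !== undefined && performance.now() - started > 1000)
    console.warn(`slow ${hook.name} hook in ${hook.ownerId}`)
  startTimes.delete(key)
}
```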
I'd like to remove the public `state` in the future (we can keep errors and such there internally) and replace it with a `testRun`. Maybe we should make the `testRun` public, but keep the methods internal?
Some questions I have after working on the lifecycle diagram (it needs polishing and should be added to the docs when we are finished) - right now it just reports when we call `onTaskUpdate`, and doesn't mention the new reporters (at least it's easier to see where things are wrong):
- How can the test have a `failed` state even before we run it? (There is an `if` for it and an event `test-failed-early`.) A similar event can happen in a suite when it has an error during collection.
- Should we add more `hook` events to remove the parsing of `onTaskUpdate`'s hooks altogether? Should we rely more on events now overall? There are some events that are not so easy to distinguish from one another (for example, the hook update also reports the test state - if the test was retried, this is one extra call).
- `onTestFinished` is not called if the test was skipped with a dynamic `context.skip` - should it still be called? 🤔
- Should we report `onTestSuiteStart`/`onTestSuiteEnd`? We call `onTaskUpdate` in this case already (notice that if a suite was skipped, it will still report a start update, but won't report `end`, unlike a `testCase` that doesn't report anything if it was skipped).
- How do we report test retrying? Right now we report the start, but retrying is updated after `afterEach` is called, making `onTaskUpdate` harder to parse (btw, both retrying and repeating are reported in the same way, as a single update).
Updated hooks proposal:
The current proposal looks like this:
Events with a dashed border are optional.