Support inspecting known failing tests in UI
Clear and concise description of the problem
As a developer using Vitest, I want to be able to see how many known failing tests there are so that I can keep track of them, rather than having them hidden away. I don't want to use skips, because I want to know if I inadvertently fix one, so that the fix doesn't regress.
I'd be willing to PR this sometime when I'm less busy.
Suggested solution
Add it to the UI in the sidebar (maybe on the dashboard?), and maybe add indicators to the CLI too, like skipped tests?
Alternative
Don't, keep track manually. Use skip.
Additional context
I'm porting test cases over from one project to another, and the new project fails some of them. I've added .fails, which suppresses them for now, but there's no good way to keep track of them, unlike skipped tests.
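To illustrate, a minimal sketch of what the ported suite looks like right now (module and test names are hypothetical):

```ts
import { describe, expect, it } from 'vitest'
import { parse } from '../src/parser' // hypothetical module being ported to

describe('ported suite', () => {
  // Passes in the new project too.
  it('handles simple input', () => {
    expect(parse('a')).toEqual({ type: 'ident', name: 'a' })
  })

  // Known failing in the new project. `.fails` keeps the run green and will
  // flag the test if it is inadvertently fixed, but nothing in the summary
  // says how many of these exist, unlike skipped tests.
  it.fails('handles nested input', () => {
    expect(parse('a(b)')).toEqual({ type: 'call', callee: 'a', args: ['b'] })
  })
})
```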
Validations
- [X] Follow our Code of Conduct
- [X] Read the Contributing Guidelines.
- [X] Read the docs.
- [X] Check that there isn't already an issue that requests the same feature to avoid creating a duplicate.
To clarify, are you suggesting to show the count of `it`/`test.fails` (https://vitest.dev/api/#test-fails) somewhere?
Different from your use case, where you intend to fix the tests later, people can also use it to assert an expected failure, so I'm not sure we can always include it in the summary. It would be interesting to know whether other frameworks do something special about it: https://jestjs.io/docs/api#testfailingname-fn-timeout, https://playwright.dev/docs/api/class-test#test-fail.
> To clarify, are you suggesting to show the count of `it`/`test.fails` (https://vitest.dev/api/#test-fails) somewhere?
Oh, yep. Sorry.
> Different from your use case, where you intend to fix the tests later, people can also use it to assert an expected failure, so I'm not sure we can always include it in the summary. It would be interesting to know whether other frameworks do something special about it: https://jestjs.io/docs/api#testfailingname-fn-timeout, https://playwright.dev/docs/api/class-test#test-fail.
Hmm. Why would someone mark it as failing instead of adding a `.not` to an `expect`, etc.?
I just read the docs you linked, and it sounds like they're meant for my purpose, even if they're not always used that way:
- "You can use this type of test i.e. when writing code in a BDD way. In that case the tests will not show up as failing until they pass. Then you can just remove the failing modifier to make them pass." (What I'm doing)
- "It can also be a nice way to contribute failing tests to a project, even if you don't know how to fix the bug." (Pretty much the same, would still be good to be able to inspect it)
- "This is useful for documentation purposes to acknowledge that some functionality is broken until it is fixed." (Same as above)
If people are using it to invert their expectations, maybe this would be a good way to guide them to writing more robust tests?
If changing this isn't an option, could we add a new modifier instead?
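For concreteness, a small sketch of the two approaches being contrasted here (`isSorted` is a hypothetical function under test):

```ts
import { expect, it } from 'vitest'
import { isSorted } from '../src/is-sorted' // hypothetical function under test

// Inverted assertion: the test reads as if the wrong behaviour were the
// intended one, and the report shows it as an ordinary passing test.
it('descending input (documents current, broken behaviour)', () => {
  expect(isSorted([3, 2, 1], 'desc')).not.toBe(true)
})

// Known failure: the assertion states the desired behaviour, and the body is
// expected to throw until the bug is fixed (at which point Vitest reports it).
it.fails('handles descending input', () => {
  expect(isSorted([3, 2, 1], 'desc')).toBe(true)
})
```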
> Why would someone mark it as failing instead of adding a `.not` to an `expect`, etc.?
Yeah, right, I get that that's the common way. Probably I was confused because Vitest's own test suites tend to use `it.fails` to test failure itself.
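Something like this (a rough sketch; the helper is just a stand-in, not real Vitest code):

```ts
import { it } from 'vitest'

// Stand-in for code whose failure is itself the behaviour under test.
async function loadConfig(options: { invalid?: boolean }) {
  if (options.invalid)
    throw new Error('invalid configuration')
  return options
}

// Here `.fails` asserts the expected failure: the test passes precisely
// because the body rejects, and there is nothing to "fix" later.
it.fails('loading an invalid config rejects', async () => {
  await loadConfig({ invalid: true })
})
```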
I'm also using `fails` to track a rewrite's compliance with the original package's test suite, and I find the behavior of `fails` incomplete for that purpose.
It's still helpful compared to conditionally using `expect(...).not`, but I don't see how `fails` helps over `expect(...).not` when the test isn't one that's meant to be fixed.
`fails` is slightly more helpful in my case since the number of failed snapshots does in fact show up, but it doesn't tell me exactly which ones.
I think the (unintentional?) behavior of snapshots in failing tests might be a hint that `fails` should be reported separately, like `skip`.
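A rough sketch of the snapshot situation I mean (`render` is a hypothetical stand-in for the rewrite):

```ts
import { expect, it } from 'vitest'
import { render } from '../src/render' // hypothetical rewrite under test

// Known failing compliance test. The snapshot mismatch is what makes the body
// throw, so `.fails` treats the test as passing, yet the failed snapshot still
// gets counted in the snapshot summary, without saying which test it came from.
it.fails('matches the original package output', () => {
  expect(render('<div/>')).toMatchSnapshot()
})
```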
I think we can print the number of tests that failed as expected on the dashboard in the UI and in the CLI, and maybe use a different icon/symbol?