[Feature] Need warning status in test results along with Pass, Failed & Skipped status
Need a warning status in test results along with the Pass, Failed & Skipped statuses.
For example, while listening for console errors: if any error is detected, I need to mark the test case as a warning instead of a failure. Those errors may be acceptable in a test environment, but the issues have to be fixed later; mainly, they are not show-stoppers in that case.
Like how we have the expect.soft() method, having an expect.warn() method would be much better. (It's just my POV.)
Current Behaviour: expect.soft() - if an assertion fails, the current test continues to execute instead of moving on to the next test, but the test is still reported as failed.
Requested Behaviour: expect.warn() - if an assertion fails, the current test continues to execute instead of moving on to the next test, and the test is reported as a warning rather than a failure.
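To make the current behaviour concrete, here is a minimal sketch using today's expect.soft() (the URL and expected title are made up for illustration):

```ts
import { test, expect } from '@playwright/test';

test('soft assertions keep the current test running', async ({ page }) => {
  await page.goto('https://example.com'); // illustrative URL

  // If this fails, the test keeps executing the remaining steps...
  expect.soft(await page.title()).toBe('Expected Title');

  // ...but the test as a whole is still reported as failed.
  await expect(page.getByRole('link', { name: 'More information' })).toBeVisible();
});
```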
@muralidharan92 there's nothing built-in as of today, but the work-around would be to build your own audits. E.g. you can
subscribe to 'console' events on browser context, and check them after the test finishes (e.g. using auto-fixtures).
I'll mark this as P3 for now to collect more feedback.
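A rough sketch of that work-around, assuming the goal is to record console errors without failing the test (the fixture name consoleErrors and the annotation type are invented for illustration):

```ts
import { test as base } from '@playwright/test';

// Auto-fixture that runs for every test: collects console errors emitted by
// any page in the context and records them once the test body has finished.
export const test = base.extend<{ consoleErrors: void }>({
  consoleErrors: [async ({ context }, use, testInfo) => {
    const errors: string[] = [];
    context.on('console', message => {
      if (message.type() === 'error')
        errors.push(message.text());
    });

    await use();

    // Annotate instead of asserting, so the test itself still passes.
    if (errors.length > 0) {
      testInfo.annotations.push({
        type: 'console-errors', // invented annotation type
        description: errors.join('\n'),
      });
    }
  }, { auto: true }],
});

export { expect } from '@playwright/test';
```

Tests would then import test from this file instead of @playwright/test; whether the fixture merely annotates or turns the collected errors into a real failure is up to you.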
Thanks for the reply @aslushnikov. This feature is much needed: for builds running in a CI/CD pipeline, the build should not fail for these non-blocking issues. Those issues can be fixed after the build succeeds.
With your suggestion, even if I use auto-fixtures to validate the errors and add them to the log, the assertion will still fail. (I'm just thinking out loud.)
Keep me posted if any other details are needed.
I'm interested in a 'warn' feature too. The use case for me is that when a tear-down step fails I don't want the test to fail, but I do want to know that something has gone wrong. Currently I think my choices are expect.soft(), console.warn() or runtime annotations. The first doesn't work because it fails passing tests, and the second and third options aren't always seen.
Sort of related, but I just want to visually mark a test as a "warning" in my HTML report. I've taken to annotating it like this:
test.info().annotations.push({ type: "Warning", description: JSON.stringify(warning) });
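If it helps anyone adopting the same trick, a self-contained sketch (the warning payload and page under test are purely illustrative):

```ts
import { test, expect } from '@playwright/test';

test('non-blocking issue surfaces as an annotation', async ({ page }) => {
  await page.goto('https://example.com'); // illustrative URL

  const warning = { reason: 'legacy widget still present' }; // illustrative payload
  // The annotation appears under the test in the HTML report without failing it.
  test.info().annotations.push({ type: 'Warning', description: JSON.stringify(warning) });

  await expect(page).toHaveTitle(/Example/);
});
```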
We are currently making a lot of baseline screenshots during our test runs. If new screenshots differ during a test run, we don't want the test to actually fail, so we are using expect.soft() for every screenshot. The problem is that, in the end, the test is still considered failed and will therefore show as failed in the final report.
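A rough sketch of the pattern described above (the snapshot names and URL are made up):

```ts
import { test, expect } from '@playwright/test';

test('visual baseline check', async ({ page }) => {
  await page.goto('https://example.com'); // illustrative URL

  // Differences are recorded and the test keeps running...
  await expect.soft(page).toHaveScreenshot('home.png');

  // ...but any recorded difference still turns the whole test red in the report.
  await expect.soft(page.locator('h1')).toHaveScreenshot('heading.png');
});
```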
Some other test frameworks have the option to use Assert.Inconclusive() as a way to indicate neither a pass nor a fail, and I have found that useful in scenarios similar to those mentioned above. It may offer a more meaningful result in the context of tests than a Warning status.