[Feature] Flaky Tests - retry all tests at end of the test run
It would be ideal to have a configuration option where flaky tests are retried at the end of the run, once all other tests have finished. I see it was discussed here, but that issue was closed and I'm not sure the concept has been revisited since.
This becomes a lot more relevant when using worker-scoped fixtures, since the thread gets torn down and all the auto: true worker-scoped fixtures are retriggered for the new thread. Since many of these startup actions are expensive in both time and resources, it makes more sense for the failed tests to share new threads at the end of the run.
My personal vision of a solution would be a config setting in the playwright.config file, e.g. retry-strategy: 'tailing'. Doing it this way would also open the door to other configurations, such as not starting a new thread on failure, a maximum number of hard fails, conditional retries, etc.
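A rough sketch of what such a setting could look like (purely illustrative; retryStrategy is not an existing Playwright option, and the name and values here are made up for this proposal):

```ts
// playwright.config.ts -- illustrative only; `retryStrategy` is NOT a real
// Playwright option, it just sketches the proposal above.
import { defineConfig, type PlaywrightTestConfig } from '@playwright/test';

// Hypothetical extension of the config: 'tailing' would mean "requeue failed
// tests and retry them only after all other tests in the run have finished".
type ProposedConfig = PlaywrightTestConfig & { retryStrategy?: 'immediate' | 'tailing' };

const config: ProposedConfig = {
  retries: 2,
  retryStrategy: 'tailing',
};

export default defineConfig(config);
```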
You can use a postinstall script to replace one line in the file:
You need to replace this._queue.unshift(result.newJob) with this._queue.push(result.newJob)
Source file: https://github.com/microsoft/playwright/blob/release-1.40/packages/playwright/src/runner/dispatcher.ts#L126
Compiled file (in your project): /node_modules/@playwright/test/lib/runner/dispatcher.js
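For reference, here is a minimal sketch of such a postinstall patch script (my own illustration, not an official workaround). It assumes the compiled file lives at the path above and still contains the exact line from release 1.40, so it may stop matching after a Playwright upgrade:

```ts
// scripts/patch-dispatcher.ts -- hypothetical postinstall patch, run from the
// project root. It rewrites the compiled dispatcher so that retried jobs are
// appended to the back of the queue instead of the front.
import { readFileSync, writeFileSync } from 'node:fs';

const file = 'node_modules/@playwright/test/lib/runner/dispatcher.js';
const before = 'this._queue.unshift(result.newJob)';
const after = 'this._queue.push(result.newJob)';

const source = readFileSync(file, 'utf8');

if (source.includes(after)) {
  console.log('dispatcher.js already patched, nothing to do.');
} else if (source.includes(before)) {
  writeFileSync(file, source.replace(before, after));
  console.log('dispatcher.js patched: retries now run after the remaining tests.');
} else {
  console.warn('Expected line not found in dispatcher.js -- check your Playwright version.');
}
```

Hook it up with a postinstall entry in package.json, e.g. "postinstall": "tsx scripts/patch-dispatcher.ts" (tsx is just one way to run a TypeScript file; a plain Node.js version of the script works the same way).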
+1
This would be pretty useful for our team as well
+1. It's useful for the case when there is an env issue and all subsequent retries will fail, but if they are postponed until a bit later, chances are the test will succeed. It would be good to be able to manage this dynamically.
+1 I would like this to be implemented :)
+1 This would be really useful when the test data is reset only once in the global setup, so the flaky test no longer has the clean state it had on the initial run
+1 for this
+1 I'd like to see this too. This is also useful when specific tests may fail while the run is sharded and the system is under heavy load, and then have a better chance of succeeding once the number of shards tails off.
I know this all speaks to generally poor systems under test, but that's kind of the reality that we have to test with much of the time.
+1 for this.
+1 I'd like to see this too.
+1 I'd like to see this too.
+1 I'd like to see this too.
+1 I'd like to see this too.
+1 Would be really useful as a config param.
Any plans for this? It would be great to have.
+1 would like this
+1
+1
We have discussed this feature request with the team. It looks like an option to run retries one by one at the end, without any parallelism, might be beneficial for reducing flakiness. Note that this will not improve the total run time; rather, it will slow it down in the hope of making the test suite more stable.
Please let us know whether you would use such a feature. If you would like to see something else, please explain your use case in detail. Thank you!
I would be very interested in this pattern. We sometimes experience flaky tests due to the backend overloading / some things locking up processes. The proposed pattern could address those.
Couldn't there be an option to run the failing tests at the end either with or without parallelism?
@marcusNumminen If you would like the feature to run retries at the end, but in parallel, please explain your use case in detail. We'd like to understand the problem before shipping a solution.
@dgozman I assume your proposed solution involves tearing down the threads at the end and creating a new pristine single thread to run the tests one by one?
Yes, we'll create a new worker and start running retries there. If some of them fail again, a new worker will be created as usual.
Hi! Sorry for the delay. In some cases we run the tests right after a deploy, and when the tests start, not everything that was deployed is fully ready yet, so some tests fail because of this. A solution with one worker running the retries at the end of the execution would of course solve the problem for us, but in this situation running the tests in parallel would be faster :)
+1