Randomize test runs
We should kick off tests in a random order. This will flush out any interdependencies between tests. However, the order should be reproducible so that such interdependencies can be debugged. Ruby's minitest has a --seed option which can be set on the CLI to control the order. If no value is provided a random seed is used, and it's reported in the output.
Randomizing the test order within a test file is the easiest place to start. Presumably .serial tests would still run in source order.
We could consider running test files in a random order as well. They're already somewhat random, though, because each test file runs in its own process, and for the same reason it would be hard to run them in a deterministic order. So probably best not to even try.
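For illustration, a minimal sketch of what a seeded, reproducible shuffle could look like (the mulberry32 PRNG and the TEST_SEED environment variable are assumptions for this example, not anything AVA implements):

```js
// A seedable PRNG (mulberry32) — Math.random() cannot be seeded, so a tiny
// PRNG like this is needed to make the shuffle reproducible.
function mulberry32(seed) {
	return function () {
		let t = (seed += 0x6d2b79f5);
		t = Math.imul(t ^ (t >>> 15), t | 1);
		t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
		return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
	};
}

// Fisher–Yates shuffle driven by the seeded PRNG: same seed, same order.
function shuffle(items, seed) {
	const random = mulberry32(seed);
	const copy = [...items];
	for (let i = copy.length - 1; i > 0; i--) {
		const j = Math.floor(random() * (i + 1));
		[copy[i], copy[j]] = [copy[j], copy[i]];
	}
	return copy;
}

// Honour a seed passed in from outside, otherwise pick one at random,
// and always report it so a failing order can be reproduced.
const seed = Number(process.env.TEST_SEED) || Math.floor(Math.random() * 2 ** 32);
console.error(`Test order seed: ${seed}`);

// Example: shuffling declared tests (or test files) before running them.
console.error(shuffle(['setup', 'login', 'logout', 'teardown'], seed));
```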
Thoughts?
There is an $80.00 open bounty on this issue on IssueHunt.
This will flush out any interdependencies per test.
I believe this is already being done, as we start all tests at the same time and run them in parallel.
In a way, the order of running/output is random, because it depends on the execution speed of each test.
We kick off the tests in source order though. The asynchronous ones will start to interleave but synchronous tests won't. And synchronous tests will block asynchronous ones that started earlier.
Yes, we discussed this a while ago and agreed it would be very useful, but not a priority for 1.0.0.
https://github.com/bahmutov/rocha is a good inspiration.
"I thought it was a pretty neat idea. I really like that he saves the execution order when tests fail. That way things stay reproduceable until fixed." - @jamestalmage about the above.
One thing that will be difficult about this when it comes to AVA is that there's no good way to enforce ordering across multiple processes. That would only cause problems if multiple test files are accessing the same system resource (disk, database, etc.). I don't think that is an easy problem to solve, and it's probably best just ignored initially. Random ordering per file would provide a lot of value on its own.
The asynchronous ones will start to interleave but synchronous tests won't.
Agreed. Additionally, most asynchronous tests will currently interleave in a predictable/identical way for each test run. Only async tests with random delays (aka: those that access system resources) will end up interleaving randomly.
This is a neat approach to finding which tests in which order cause a failure: http://make.bettermistak.es/2016/03/05/rspecs-bisect/
We could consider running test files in a random order as well, though they're already somewhat random because each test file runs in its own process
I think we should still shuffle the glob-resolved array of files just to be sure.
@novemberborn sounds like you'd have to save the run order to disk for every run
Hey! Is this issue still up for grabs (IssueHunt bounty)? Thanks! @wprater @novemberborn
@mighty-phoenix it is, though this behavior could be achieved using the sortTestFiles() feature in 4.1.0.
I'd still like to see it within AVA, but the implementation has to be pretty good, now that it's possible to build this on top of AVA for most scenarios.
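For anyone looking to build this on top of AVA today, a sketch of a reproducible file shuffle via that option. It assumes sortTestFiles receives two test file paths and behaves like a standard comparator; the seeded-hash trick and the TEST_SEED variable are just one way to keep the order reproducible:

```js
// ava.config.js
import crypto from 'node:crypto';

// Reuse a seed from the environment if given, otherwise pick one and
// report it so a failing order can be reproduced.
const seed = process.env.TEST_SEED ?? crypto.randomBytes(4).toString('hex');
console.error(`Shuffling test files with seed ${seed}`);

// Hash each path together with the seed; sorting by the hashes yields a
// pseudo-random but stable file order for a given seed.
const hash = file => crypto.createHash('sha256').update(seed + file).digest('hex');

export default {
	sortTestFiles: (a, b) => hash(a).localeCompare(hash(b)),
};
```

Keying the hash on the seed gives a consistent total order per seed, so re-running with the reported seed reproduces the same file order.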