Controlling execution order of the testbenches
Is it possible to have a feature to control the execution order of the testbenches? For example, having short testbenches executed first and the slow ones last, or something similar.
Sure, it is possible, but there is no such feature yet.
This is part of a broader approach to making test execution more efficient:
- Run short tests first to minimize the time to bug detection. Without any proof, I believe this is a good strategy in most cases.
- Perform load balancing between threads to make the total test time shorter.
I could also look into implementing something if no one else plans to do this. Any suggestions on how this should be implemented as a feature? How are tests currently ordered, and how are they distributed to different threads?
In the past I fixed a bug in Ruby on Rails parallel testing for Rails 6.0, so some knowledge from there could be useful here too :)
I believe tests are run in the order they appear in the testbench and in the order the files are added, so you have some control already today.
When run in parallel, tests are simply sent to worker threads in the same order as they would be processed in a single thread.
If more control over the execution order is needed, there is an attribute system that could be used to set a priority.
I'd like to give implementing this feature a try. @kraigher, what do you mean by "the attribute system could be used"? Can you give me an example of this? I haven't used this library that much yet. Also, how can VUnit determine how long each test case takes, so that better load balancing can be implemented?
I think that in the Ruby parallel testing gem, the time to execute each test is measured when the tests are run for the first time, so on the next testing session this information can be used to evenly distribute the test cases.
@kazooiebombchu In the documentation for attributes you'll find this example:
my_test.set_attribute(".requirement-117", None)
where a custom requirement attribute is set for a test case. Note that the attribute starts with ".", since names not starting with "." are reserved for builtin attributes. This means that we could add a new builtin attribute, priority. Also note the None argument. We don't use it today, but it was added to prepare for attributes with values, i.e. we could have a priority number dictating the order of execution. Adding such support would be the first step. Once you have that, you can create whatever algorithm you want to set that priority number. For example, you can take the XML report from a previous run, extract the test execution times from it, and use them to create a balanced order of execution.
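To make that concrete, here is a minimal sketch of attaching a priority through the existing attribute API. It is illustrative only: the library, testbench, and test names are made up, attribute values are not used today (hence the priority encoded in the custom attribute name, like the ".requirement-117" example), and nothing in the runner consumes such an attribute yet.

```python
# Sketch only: assumed names, and no part of VUnit reads a priority attribute today.
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("tb/*.vhd")

tb = lib.test_bench("tb_example")        # assumed testbench name
smoke = tb.test("short_smoke_test")      # assumed test names
stress = tb.test("long_stress_test")

# Custom attributes must start with "." and currently carry no value,
# so the priority is encoded in the attribute name for now. A builtin
# "priority" attribute with a numeric value would replace this.
smoke.set_attribute(".priority-0", None)
stress.set_attribute(".priority-100", None)

vu.main()
```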
@kazooiebombchu Regarding VUnit knowing the execution time of the tests: there is no mechanism for that today. If the use case is load balancing when re-running regressions on a test server, the test execution times could be read from a previous test result XML file given via a command line argument.
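Until such a command line option exists, the parsing itself is straightforward. A minimal sketch, assuming a JUnit-style report such as the one produced by --xunit-xml, where each testcase entry carries a name and a time attribute (the exact name layout of the entries is an assumption here):

```python
# Sketch: read per-test runtimes from a previous JUnit-style XML report and
# derive an execution order from them. Nothing in VUnit consumes this yet.
import xml.etree.ElementTree as ET

def load_runtimes(xml_path):
    """Return {test_name: seconds} parsed from a JUnit-style report."""
    runtimes = {}
    root = ET.parse(xml_path).getroot()
    for case in root.iter("testcase"):
        runtimes[case.get("name")] = float(case.get("time", "0"))
    return runtimes

def shortest_first(test_names, runtimes):
    """Tests without a previous time run first, then shortest to longest."""
    return sorted(test_names, key=lambda name: runtimes.get(name, -1.0))

def balance(test_names, runtimes, num_threads):
    """Greedy longest-processing-time-first assignment to worker threads."""
    buckets = [[] for _ in range(num_threads)]
    totals = [0.0] * num_threads
    for name in sorted(test_names, key=lambda n: runtimes.get(n, 0.0), reverse=True):
        idx = totals.index(min(totals))   # put the next-longest test on the lightest thread
        buckets[idx].append(name)
        totals[idx] += runtimes.get(name, 0.0)
    return buckets
```

The ordering and balancing part is simple; the missing piece is a hook in VUnit that accepts such an order or priority list.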
What do you guys think of adding the ability to change the test execution order from (pseudocode):
```
for test in tests:
    for config in test:
        run(test, config)
```

to:

```
for config in configs:
    for test in tests:
        if config in test:
            run(test, config)
```
So basically just swapping the order so that you can check that all tests pass for some base config, and then run the more complicated configurations afterwards.
Could be an option set from the Python interface and potentially a command line option, though that seems excessive IMO.
This seems a lot easier to implement than smart load balancing.
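For reference, this is roughly what the test × configuration matrix in the pseudocode above looks like from the Python interface today; the testbench name, generic names, and values below are made up, and no option exists yet to iterate config-first.

```python
# Illustration of a tests × configs matrix; assumed names and generics.
from vunit import VUnit

vu = VUnit.from_argv()
lib = vu.add_library("lib")
lib.add_source_files("tb/*.vhd")

tb = lib.test_bench("tb_example")
# Under the proposed ordering, "base" would run for every test first,
# and the heavier configurations only afterwards.
tb.add_config(name="base", generics=dict(data_width=8))
tb.add_config(name="wide", generics=dict(data_width=64))
tb.add_config(name="stress", generics=dict(data_width=64, iterations=10_000))

vu.main()
```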