Test ordering
This is a feature request. It is low priority, but it would be nice to have. Maybe the feature already exists?
I have tests in different source (.cpp) files. Each .cpp file has a group of related tests. Sometimes one group of tests relies on objects that are themselves tested, but in another group. I would like to order them such that the group of tests that relies on some objects runs AFTER the group of tests that tests those objects in the first place.
That may have been confusing, so here is an example:
GROUP1 - tests for sorting on containers (containerSortTests.cpp)
GROUP2 - tests for containers to make sure they work properly (containerTests.cpp)
I would like the Group2 tests to run before Group1. If the Group2 tests fail, then Group1 will most likely fail regardless. The 'ordering' of the failures will tell me to fix the containers first.
If all the tests belong together, but some are hierarchically higher-level and should be executed first (or last - or both), then you can do this already by using nested SECTIONs (sketched below). However, from my reading of what you're saying, you want them to be kept separate - e.g. application-level and library tests. It sounds like what you want is to be able to specify dependency test names, such that the dependencies are tested first (e.g. the library code), and only if those pass is the application-level code that relies on them tested.
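A minimal sketch of the nested-SECTIONs approach, using the container/sorting example from above (the names and assertions here are illustrative, and the single-header catch2/catch.hpp include is assumed):

#include <catch2/catch.hpp>

#include <algorithm>
#include <vector>

TEST_CASE("containers and sorting")
{
    // Runs from the top for every leaf section.
    std::vector<int> v{ 3, 1, 2 };

    SECTION("container works")
    {
        REQUIRE(v.size() == 3);

        SECTION("sorting the container")
        {
            // Only reached on runs where the enclosing section's
            // assertions have passed, so container failures surface first.
            std::sort(v.begin(), v.end());
            REQUIRE(v.front() == 1);
        }
    }
}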
Where I have done something similar, I've put them in different test executables. Then I can easily script them so the dependent tests are only executed if the dependencies pass (Catch returns a non-zero return code - actually the number of failures - if the tests fail).
That is correct; I would like to be able to specify dependencies. I can put them in different executables. The problem is that this requires a lot of maintenance, which will increase with the number of executables I have. For example, if another pre-processor definition is added for the framework, or more includes, etc., all the test projects will have to be updated. Add to that the multitude of different configurations (debug, release, debug_dll, release_dll, release_withdebug, etc.) and it becomes a maintenance nightmare (ideally, Visual Studio should be helping me with this, but sadly that is not the case).
So, in short, it would be great if I could 'group' all the TEST_CASEs and then specify their dependencies (or, if I don't specify any, they can be run in any order). Also, is CATCH able to run tests in parallel? If not, this could be a way of preparing for that as well. For example, I, as a user, can guarantee that two tests that do not depend on each other can run independently and thus in parallel. This is just a thought; it is obviously more complicated and may have more issues.
I understand that full test dependency ordering may be a lot of work, but it would be really useful to have some way to specify order in the code (e.g. to test leaf modules before the modules that use them).
To some degree, test ordering can already be handled by the --test command line option. Assuming my class tests all begin with /class/class-name, to force A, B and C to be tested in that order I can do:
testexe --test /class/A/* /class/B/* /class/C/*
What would be really useful is if I could set the default --test list in the code. Then I could force the standard tests to run in my desired (dependency-based) order.
Something like this, with a null-terminated array of test spec strings:
// tests run in left-to-right order; force leaf classes to be tested first:
CATCH_DEFAULT_TESTS( { "/class/A/*", "/class/B/*", "/class/C/*" } )
Does that sound like it might be an acceptable addition? If so I could look into implementing it.
It doesn't seem like it would be too hard to register a function that gets called to initialize the default filters, or to just have a default filters global.
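For illustration, a rough sketch of the second option (the global's name, and the idea that Catch's main would consult it, are hypothetical - no such hook exists today):

#include <string>
#include <vector>

// Hypothetical: a global that Catch's default main would consult for
// default test specs when none are given on the command line.
// Not an existing Catch API - illustration only.
std::vector<std::string> g_defaultTestSpecs = { "/class/A/*", "/class/B/*", "/class/C/*" };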
I too desire to have this functionality. I think it's as simple as allowing a test definition to contain a third field of pre-requisites.
Instead of...

TEST_CASE("sorting", "test basic sort routines");
TEST_CASE("container", "Tests the base containers");
TEST_CASE("container_sorting", "Tests the containers sorting");
...we could instead rewrite that as:

TEST_CASE("sorting", "test basic sort routines");
TEST_CASE("container", "Tests the base containers");
TEST_CASE("container_sorting", "Tests the containers sorting", "{container,sorting}");
where the added third parameter is simply the names of the test cases that must pass without failure before this test case will run. Thus the container_sorting tests would only be run if the test cases "container" and "sorting" had no failures.
I too would be interested in implementing this feature if it's not a priority.
I was also looking for this functionality. Since it's not implemented, I thought I would share a work-around I found using a custom main function.
Assuming the following tests (I took acidtonic's example, but with tags added, since the main below selects test cases by tag):

TEST_CASE("sorting", "[sorting]");
TEST_CASE("container", "[container]");
TEST_CASE("container_sorting", "[container_sorting]");

The main could then be written as follows:
#define CATCH_CONFIG_RUNNER
#include <catch2/catch.hpp>

#include <string>
#include <string_view>

int main(int argc, char** argv)
{
    Catch::Session s;
    int init_code = s.applyCommandLine(argc, argv);
    if (init_code != 0)
    {
        return init_code;
    }
    auto config_data = s.configData();
    int num_failed = 0;
    // Run the prerequisite groups first, one tag at a time.
    for (std::string_view tag : { "[container]", "[sorting]" })
    {
        config_data.testsOrTags = { std::string(tag) };
        s.useConfigData(config_data);
        num_failed += s.run();
    }
    // Run the dependent group only if all prerequisites passed.
    if (!num_failed)
    {
        for (std::string_view tag : { "[container_sorting]" })
        {
            config_data.testsOrTags = { std::string(tag) };
            s.useConfigData(config_data);
            num_failed += s.run();
        }
    }
    return num_failed;
}
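One thing to note about this sketch: the loops assign testsOrTags after applyCommandLine has run, so any test filters passed on the command line are overwritten by the hard-coded tag groups. Each s.run() call executes one tag group, and the accumulated failure count gates the dependent [container_sorting] group.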
I was just wondering whether the code supported such a feature...
My current solution is to use a mix of the above:
- I create groups using tags in each set of test cases (as shown by KazukiCP):

  // test cases are assigned different tags as follows
  TEST_CASE("some_name", "[container]")

- As a result, I can create a single binary to run all the tests (as requested by samaursa).

- I run a specific set of tests using the tags through a bash script; that bash script defines the order (as mentioned by philsquared):

  #!/bin/bash -e
  # running the tests
  unittest [container] &
  unittest [sorting] &
  wait
  unittest [container_sorting]

The -e option tells bash to exit on errors. The & runs those tests in the background, and thus I can run two unittest instances in parallel. The wait is a synchronization point which blocks until the previous unittest instances are done.

Obviously, the order within a group is undefined. As mentioned above, defining sections within a test is a way to run your tests in a given order. However, that could mean a gigantic test case, and that's probably not a good idea (really hard to maintain).