cargo-all-features
Add flags to print dry run of commands to support concurrent runs
Could there be a way to just print out the commands instead of executing them?
For our continuous integration setup in GitHub, we're noticing that the wall clock times of our simple build & test jobs have gone up in our pending switch to `cargo-all-features`. I suspect that it's due to all of the different combinations of feature sets being run serially.
In our CI, we have the ability to run jobs concurrently in order to save on wall clock time. In fact, our build & test job is set up that way using a "job matrix" in which we indicate which variables take on which enumerated sets of values, and the CI system creates the combinatorial number of jobs according to the Cartesian product of the enum sets. We currently run on 3 different OSes, and our enumerated set of feature sets is `["", "--all-features"]`, where the empty string actually refers to the default set.
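Roughly, the matrix looks like the following sketch (the runner labels and steps are illustrative, not our exact config):

```yaml
# Illustrative sketch of the OS x feature-set matrix described above.
jobs:
  build-and-test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        features: ["", "--all-features"]  # "" = default feature set
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: cargo test ${{ matrix.features }}
```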
So if there was a way to obtain the series of commands used to test each unique combination of features, then it would be possible to cut down on our wall clock time by a significant factor.
Does `cargo-all-features` work well with specifying parallel jobs using the `-j` option in cargo? Our CI system uses 2-core CPUs, so there's at least some parallelism to be had there.
Independent of that, however, is the fact that the max distributed job concurrency allowed by the CI is 5 jobs. So having the sequence of commands used per unique combination of features could enable that improvement, too.
@sffc

> So if there was a way to obtain the series of commands used to test each unique combination of features, then it would be possible to cut down on our wall clock time by a significant factor.
Not currently, but it can be added! Are you thinking something like `cargo test-all-features --print-commands` that prints all the `cargo test` commands that will be invoked (one per line)?
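Something like the following, maybe (hypothetical: `--print-commands` doesn't exist yet, and splitting the printed lines into shards is just one way a CI job could consume the output):

```yaml
# Hypothetical usage sketch: fan the printed commands out across 5 concurrent jobs.
jobs:
  test-feature-combinations:
    strategy:
      matrix:
        shard: [0, 1, 2, 3, 4]  # CI allows up to 5 concurrent jobs
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: cargo install cargo-all-features
      - name: Run this shard's feature combinations
        run: |
          cargo test-all-features --print-commands \
            | awk 'NR % 5 == ${{ matrix.shard }}' \
            | sh -ex
```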
Yes, that's what I was thinking. That would be great to add, thanks!
I don't think that it should run multiple feature sets in parallel. Instead, each feature set should already be using the maximum allowed jobs. Perhaps add a jobs flag that simply forwards its argument to cargo, but only does one cargo run at a time.