coq-nix-toolbox
Regrouping action jobs into one
Instead of creating one job per package, I'd rather have the action launch three nix-builds: one for the dependencies, one for the main packages, and one for the reverse dependencies, with `--keep-going` and logging of each build, together with the printing of a summary at the end of the action (cf. https://stackoverflow.com/questions/67507373/how-to-attach-a-markdown-page-to-github-actions-workflow-run-summary for an example).
- Pros: no limit on the number of jobs, no caching required within an action, fewer downloads, fewer opportunities for failures and delays on nix/cachix installs, more ecological. It would also justify the removal of the `--job` argument, which was pretty artificial in the end, in favor of the more standard `-A` option.
- Cons: we only get the summary at the very end, so in case of an early failure, we have to wait (unless there is a way to update the summary live?). We have to work a little bit to pretty-print a readable output. (We could even have a dot graph with clickable entries and links to the respective build logs.) I'm not sure GitHub keeps track of error locations, though...
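To make this more concrete, here is a minimal sketch of what such a single job could look like. The attribute names `deps`, `main` and `rev-deps`, the action versions and the log/summary layout are illustrative assumptions, not the actual toolbox setup:

```yaml
# Hypothetical single-job workflow: three nix-builds with --keep-going,
# each one logged to a file, and a Markdown summary published at the end.
name: build-all
on: [push, pull_request]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: cachix/install-nix-action@v20
      - name: Build dependencies, main packages and reverse dependencies
        run: |
          status=0
          # "deps", "main" and "rev-deps" are placeholder attribute names.
          for target in deps main rev-deps; do
            if nix-build --keep-going -A "$target" > "build-$target.log" 2>&1; then
              echo "| $target | success |" >> summary.md
            else
              echo "| $target | failure |" >> summary.md
              status=1
            fi
          done
          exit $status
      - name: Publish summary
        if: always()
        run: |
          {
            echo "| target | result |"
            echo "|--------|--------|"
            cat summary.md
          } >> "$GITHUB_STEP_SUMMARY"
      - name: Upload build logs
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: build-logs
          path: build-*.log
```

The logs are uploaded as artifacts so that each failing build can still be inspected individually, even though everything runs in a single job.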
@Zimmi48 WDYT?
I'm not sure I get this. Do you want to make this change in all cases? There is an obvious major drawback, namely less parallelism: jobs allow independent builds to run in parallel, which is more limited if we launch everything in a single runner. So while this might make sense for simple cases, it doesn't seem to make sense in general...
Right... though, it's better to have one runner than several with no caching... Which happens every time one updates their actions :-/... I do not know what to think.
> Which happens every time one updates their actions

If this is done from a fork, yes. And this is indeed an issue. Would it be reasonable / doable to (dynamically) detect whether caching is available and to use one strategy or the other based on this (the two "sets" of jobs could already be defined in the workflow, but only a subset would be triggered...)?
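For illustration, a rough sketch of such a detection, assuming the cache is only usable when a Cachix auth token is exposed to the run (which is not the case for pull requests coming from forks). The `CACHIX_AUTH_TOKEN` secret name, the job names and the job bodies are placeholders:

```yaml
jobs:
  # Decide which strategy to use, based on whether the (placeholder)
  # CACHIX_AUTH_TOKEN secret is exposed to this run; it is not exposed
  # to pull requests coming from forks.
  detect-cache:
    runs-on: ubuntu-latest
    outputs:
      has-cache: ${{ steps.check.outputs.has-cache }}
    steps:
      - id: check
        env:
          CACHIX_AUTH_TOKEN: ${{ secrets.CACHIX_AUTH_TOKEN }}
        run: |
          if [ -n "$CACHIX_AUTH_TOKEN" ]; then
            echo "has-cache=true" >> "$GITHUB_OUTPUT"
          else
            echo "has-cache=false" >> "$GITHUB_OUTPUT"
          fi

  # First "set" of jobs: one job per package, used when caching is available.
  per-package:
    needs: detect-cache
    if: needs.detect-cache.outputs.has-cache == 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "one job per package, sharing builds through the cache"

  # Second "set": everything in a single job, used when caching is not available.
  single-job:
    needs: detect-cache
    if: needs.detect-cache.outputs.has-cache != 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "single job running the three nix-builds with --keep-going"
```

The intermediate detection job works around the fact that the `secrets` context cannot be used directly in a job-level `if:`, whereas `needs` outputs can.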
Yes, it is technically possible indeed!