Use new nf-test features
- ci: Attempt to split everything out
- ci: Add changed since, sharding, and ci
- ci: Add filter to try to get jobs split up
- ci: Switch to only-changed
- ci: See if follow-dependencies works without "related-tests"
We do have adamrtalbot/detect-nf-test-changes as a fallback option?
That's what we're currently using, so I don't really see it as a fallback :laughing: It works fine, but I think the sharding can work better and waste fewer resources by spinning up a consistent number of runners per PR instead of one runner per test.
That would close #391 as well, and might encourage people to make smaller PRs if we're only giving them 9 CI runners per PR anyway.
The current thing blocking this is https://github.com/nf-core/tools/issues/3140, or we can give up and keep the paths-filter step for that.
Removed the need to use nf-core lint with pre-commit, as that wasn't truly blocking this from getting merged.
We can make a follow up issue to run every linting step through pre-commit.
Looking good, there's still some leftover comments in the nf-test yml, but I like it
@maxulysse I think I've addressed the comments that can be addressed.
@adamrtalbot I think this goes slightly against https://github.com/nf-core/modules/pull/4545#issuecomment-1845027257, but I think it's worth splitting up since we should be running pytest-workflow less and less, and the sharding is a completely different flow. Though it will probably start up the pytest-workflow workflow on every PR...
Also, the GHA says it uses actions/setup-java v4, but so does master, and they're in conflict 😬
You should add `v4.4.0` as a comment on that action, since you're referencing an explicit commit.
Stress test: https://github.com/nf-core/modules/pull/6716
Number of tests before running out of space:

| test type | method | shard | tests |
|---|---|---|---|
| process | conda | 2 | 33+ |
| process | conda | 3 | 29+ |
| process | docker_self_hosted | 1 | 18 |
| process | docker_self_hosted | 2 | 30+ |
| process | docker_self_hosted | 3 | |
| process | singularity | 3 | 37+ |
Then GHA started clearing out the logs, so I couldn't count 😒.
Still, it looks like 30 is the max for a normal set of process tests. Workflows are fine.
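Given that ~30-test ceiling, a rough back-of-envelope for the shard count might look like this (a hypothetical sketch with an assumed test count, not what the CI actually does):

```shell
# Hypothetical: how many shards keep each runner at or below the
# ~30 process tests observed before the runner ran out of space.
total_tests=100      # assumed number of related tests for a big PR
cap_per_runner=30    # observed ceiling from the stress test above
# ceil(total_tests / cap_per_runner) via integer arithmetic
shards=$(( (total_tests + cap_per_runner - 1) / cap_per_runner ))
echo "$shards"
```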
What's your take on it then? Is 30 enough?
What if we used

```yaml
strategy:
  max-parallel: 2
```

or 3 or whatever we decide. Then we could do 6 or 7 for the number of shards; ideally, if there's a small number of tests they'll finish quickly, but one massive PR won't hold up the CI runners from other small PRs.
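For illustration, the idea might look something like this in the workflow (a sketch with assumed shard counts and job layout, not the actual nf-test.yml):

```yaml
jobs:
  nf-test:
    strategy:
      fail-fast: false
      max-parallel: 2          # cap concurrent runners per PR (or 3, whatever we decide)
      matrix:
        shard: [1, 2, 3, 4, 5, 6]
    runs-on: ubuntu-latest
    steps:
      # hypothetical invocation: run only this job's slice of the tests
      - run: nf-test test --shard ${{ matrix.shard }}/${{ strategy.job-total }}
```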
What if
```console
$ nf-test test --dry-run --filter process --changedSince origin/master

🚀 nf-test 0.9.0
https://www.nf-test.com
(c) 2021 - 2024 Lukas Forer and Sebastian Schoenherr

Load .nf-test/plugins/nft-bam/0.3.0/nft-bam-0.3.0.jar
Load .nf-test/plugins/nft-vcf/1.0.7/nft-vcf-1.0.7.jar

Detected 9 changed files
  • .github/conda_skip.yml
  • .github/workflows/lint.yml
  • .github/workflows/nf-test.yml
  • .github/workflows/pytest-workflow.yml
  • .github/workflows/test.yml
  • .pre-commit-config.yaml
  • modules/nf-core/bowtie/align/main.nf
  • modules/nf-core/bowtie/build/tests/main.nf.test
  • modules/nf-core/bowtie/build/tests/main.nf.test.snap

Found 2 related test(s)
  • /Users/emiller/src/nf-core/modules/modules/nf-core/bowtie/build/tests/main.nf.test
  • /Users/emiller/src/nf-core/modules/modules/nf-core/bowtie/align/tests/main.nf.test

Dry run mode activated: tests are not executed, just listed.

Test Process BOWTIE_ALIGN
  Test [66a82e02] 'sarscov2 - single_end' PASSED (0.0s)
  Test [5cac2e90] 'sarscov2 - single_end - stub' PASSED (0.0s)
  Test [c7794ac1] 'sarscov2 - paired_end' PASSED (0.0s)
  Test [c3125b42] 'sarscov2 - paired_end - stub' PASSED (0.0s)

Test Process BOWTIE_BUILD
  Test [9a210173] 'sarscov2 - fasta' PASSED (0.0s)
  Test [706b144c] 'sarscov2 - fasta - stub' PASSED (0.0s)

SUCCESS: Executed 6 tests in 0.06s
```
We parse this for the related tests line, then pick the number of shards based off the number of tests found.
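For example, the shard count could be picked by grepping that line (a hypothetical sketch; the variable names and the cap of 6 are assumptions, not the actual CI script):

```shell
# Hypothetical: extract "Found N related test(s)" from the dry-run output
# and cap the shard count at a fixed maximum.
output="Found 2 related test(s)"   # stand-in for the captured nf-test output
num_tests=$(printf '%s\n' "$output" | sed -n 's/.*Found \([0-9][0-9]*\) related test(s).*/\1/p')
max_shards=6
shards=$(( num_tests < max_shards ? num_tests : max_shards ))
echo "$shards"
```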
that would work, but we can probably do something smarter than that. Still, that's good enough for me for now
Per Slack, the things that were holding this PR back:
1. The number of jobs we can pack into a GitHub runner: https://github.com/nf-core/modules/pull/6716
2. A nice summary of the tests that ran and, more specifically, the tests that failed: https://github.com/nf-core/sarek/pull/1701
@SPPearce:
> Because this should reduce the number of spurious tests that run because people haven't merged in the most recent commit, which might help with the runners in general.
IMO:
- If the first becomes too much of an issue, we figure out a fix; until then, we just bump up the number of shards until it's not a problem most of the time.
- The second is nice, and needed, but if this is blocking up CI runners then yeah, I want this PR merged.
Tests are only failing because of #6664
Going to merge since the broken tests aren't in scope, and hopefully I'll catch any bugs before Europe wakes up 🤞🏻