Add dream module
Closes https://github.com/nf-core/differentialabundance/issues/363

PR checklist
- [ ] This comment contains a description of changes (with reason).
- [ ] If you've fixed a bug or added code that should be tested, add tests!
- [ ] If you've added a new tool - have you followed the module conventions in the contribution docs?
- [ ] If necessary, include test data in your PR.
- [ ] Remove all TODO statements.
- [ ] Emit the `versions.yml` file.
- [ ] Follow the naming conventions.
- [ ] Follow the parameters requirements.
- [ ] Follow the input/output options guidelines.
- [ ] Add a resource `label`.
- [ ] Use BioConda and BioContainers if possible to fulfil software requirements.
- Ensure that the test works with either Docker / Singularity. Conda CI tests can be quite flaky:
  - For modules:
    - [ ] `nf-core modules test <MODULE> --profile docker`
    - [ ] `nf-core modules test <MODULE> --profile singularity`
    - [ ] `nf-core modules test <MODULE> --profile conda`
  - For subworkflows:
    - [ ] `nf-core subworkflows test <SUBWORKFLOW> --profile docker`
    - [ ] `nf-core subworkflows test <SUBWORKFLOW> --profile singularity`
    - [ ] `nf-core subworkflows test <SUBWORKFLOW> --profile conda`
Nico, I found several issues and errors in the original module. I've been rewriting it a little differently, so I am going to push my changes and suggestions here, along with the migration to the template format.
@nschcolnicov feel free to check whether these changes suit the pipeline structure and what you and Alan envisioned; if all looks good to you, we could ask for reviewers.
Ping @grst for final review
The errors seem to be unrelated to this module:
- The linting failure is in `differential_functional_enrichment`. It seems like empty md5sums are now detected; I did not change the snapshot or the files there, and these empty files seem to have been in the snapshot previously.
- The failing tests are also unrelated to the changes in this module. For example, the files `treatment_mCherry_hND6_sample_number_test_limma_voom.limma.results.tsv` and `treatment_mCherry_hND6__test_limma_voom.limma.results.tsv` have different md5sums. These failures seem to be the same as in https://github.com/nf-core/modules/pull/7870, where there was a very simple, minimal change in the `tag` (which should not affect any result). It looks like the limma-related tests are the ones failing?
Guys, I have a huge problem with the dream treatment in the differential subworkflow here.
The idea of that subworkflow is that the differential methodology is dictated by channel content. There shouldn't be a custom input channel or a custom output channel for dream; otherwise the calling workflow will have to do different things for the dream case, which is very bad.
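A minimal sketch of that channel-content-driven dispatch, with hypothetical workflow, module, and field names (the real subworkflow and its modules may differ): the meta map carries the method, so dream needs no dedicated input or output channel.

```nextflow
// Hypothetical sketch: DESEQ2_DIFFERENTIAL, LIMMA_DIFFERENTIAL and
// DREAM_DIFFERENTIAL stand in for include'd modules; all names illustrative.
workflow DIFFERENTIAL {
    take:
    ch_input // channel: [ meta, samplesheet, matrix ]; meta.method selects the tool

    main:
    // Dispatch on channel content: every method flows through the same input
    ch_branched = ch_input.branch { meta, samplesheet, matrix ->
        deseq2: meta.method == 'deseq2'
        limma:  meta.method == 'limma'
        dream:  meta.method == 'dream'   // same tuple shape as the others
    }

    DESEQ2_DIFFERENTIAL(ch_branched.deseq2)
    LIMMA_DIFFERENTIAL(ch_branched.limma)
    DREAM_DIFFERENTIAL(ch_branched.dream)

    emit:
    // One merged output channel, regardless of method
    results = DESEQ2_DIFFERENTIAL.out.results
        .mix(LIMMA_DIFFERENTIAL.out.results, DREAM_DIFFERENTIAL.out.results)
}
```

With this shape, the calling workflow consumes dream results exactly like any other method's results.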
Side note: could you please make sure you seek review from a listed maintainer when PR'ing?
Agreed that dream shouldn't have separate channels. Sorry, I didn't pay close attention to the subworkflow; I was looking more at the R script.
Also worth mentioning that we see dream as a test run for providing the model specification as a formula (because you can't easily specify a mixed effect via blocking factors), so it's worth making sure this part also matches your expectations, @pinin4fjords
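For concreteness, a minimal sketch of what that could look like, assuming (hypothetically) that the formula travels as a plain string alongside the contrast metadata; the field names and the `(1 | sample_number)` random effect are illustrative, not the pipeline's actual schema:

```nextflow
workflow {
    // A fixed-effect contrast is fully described by variable/reference/target,
    // but a dream mixed model needs a formula with a random-effect term,
    // which blocking factors alone cannot express.
    ch_contrasts = Channel.of(
        [ id: 'treatment_mCherry_hND6', method: 'limma',
          variable: 'treatment', reference: 'mCherry', target: 'hND6' ],
        [ id: 'treatment_mCherry_hND6_dream', method: 'dream',
          variable: 'treatment', reference: 'mCherry', target: 'hND6',
          formula: '~ treatment + (1 | sample_number)' ] // passed through to dream
    )
    ch_contrasts.view()
}
```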