
Add umicollapse and benchmark it against umitools dedup

MatthiasZepper opened this issue 2 years ago • 32 comments

Description of feature

umicollapse is a Java tool with an existing nf-core module that is so far not used by any pipeline.

It was specifically written as a more performant alternative that supposedly produces output equivalent to umi-tools dedup, but with significantly shorter runtimes (minutes instead of hours).

Therefore, it might make sense to switch from umi-tools dedup to umicollapse, or at least to offer it as an alternative route in the future, similar to the coexistence of TrimGalore and fastp.

To advance this development, please:

  • Familiarize yourself with umicollapse and its settings.

  • Create a new branch of the pipeline in your fork and add the umicollapse module to it, replacing the umi-tools dedup step. Update the module config accordingly.

  • Benchmark the modified pipeline against version 3.12 when run on data with UMIs, comparing runtimes and quantification results. Possible benchmarking datasets are e.g. GSE207129 or GSE179992 with these parameters (a Groovy sketch of what the extraction regexes do follows this list)

    with_umi: true
    umitools_extract_method: "regex"
    umitools_bc_pattern2: "^(?P<umi_1>.{8})(?P<discard_1>.{6}).*"
    skip_markDuplicates: true
    

    or Experiment 2 of Series GSE171427 when run with

    with_umi: true
    umitools_extract_method: "regex"
    umitools_bc_pattern: "^(?P<umi_1>.{12}).*"
    skip_markDuplicates: true
    clip_r2: 9
    

    I think this is a suitable Hackathon task for a more advanced nf-core contributor. Thanks!
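
For clarity, here is a quick Groovy sketch of what these extraction regexes do (the read sequence is invented; note that umi-tools uses Python-style (?P<name>...) groups, while Java/Groovy regexes use (?<name>...) and do not allow underscores in group names):

    // Hypothetical demo of the first pattern: the first 8 bases become the UMI,
    // the following 6 bases are discarded before alignment.
    def read = 'ACGTACGTNNNNNNTTTTGGGGCCCCAAAA'  // made-up read sequence
    def m = (read =~ /^(?<umi1>.{8})(?<discard1>.{6}).*/)
    if (m.matches()) {
        println "UMI:     ${m.group('umi1')}"      // ACGTACGT -> appended to the read name
        println "Discard: ${m.group('discard1')}"  // NNNNNN   -> removed entirely
    }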

MatthiasZepper avatar Oct 05 '23 13:10 MatthiasZepper

Hi there, I wrote the module! We are using it in our updated nf-core/clipseq pipeline (not released yet). It is able to handle much larger BAMs, and I have benchmarked that it gives identical results. So hopefully this should be an easy issue.

CharlotteAnne avatar Oct 10 '23 07:10 CharlotteAnne

Great work and thank you so much for the info!

So this issue comes down to writing a corresponding subworkflow, similar to bam_dedup_stats_samtools_umitools, but using umicollapse instead.
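
A hypothetical skeleton of what such a subworkflow could look like (module paths, channel shapes and output names are assumptions, not a final implementation):

    // Sketch only: mirrors bam_dedup_stats_samtools_umitools, with umicollapse
    // swapped in as the deduplication step (stats omitted for brevity).
    include { UMICOLLAPSE    } from '../../modules/nf-core/umicollapse/main'
    include { SAMTOOLS_INDEX } from '../../modules/nf-core/samtools/index/main'

    workflow BAM_DEDUP_SAMTOOLS_UMICOLLAPSE {
        take:
        ch_bam_bai   // channel: [ val(meta), path(bam), path(bai) ]

        main:
        UMICOLLAPSE ( ch_bam_bai )               // UMI-based deduplication
        SAMTOOLS_INDEX ( UMICOLLAPSE.out.bam )   // re-index the deduplicated BAM

        emit:
        bam      = UMICOLLAPSE.out.bam
        bai      = SAMTOOLS_INDEX.out.bai
        versions = UMICOLLAPSE.out.versions.mix( SAMTOOLS_INDEX.out.versions )
    }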

Maybe @drpatelh can give a hint whether he prefers replacing umi-tools dedup outright or leaving the choice to the user?

MatthiasZepper avatar Oct 10 '23 11:10 MatthiasZepper

Be interested to see what your implementation looks like @CharlotteAnne ! But adding an independent subworkflow either way as you suggested @MatthiasZepper is a good idea. We can then decide how we slip it into the pipeline.

drpatelh avatar Oct 11 '23 18:10 drpatelh

You can see it in the https://github.com/nf-core/clipseq/tree/feat-2-0 branch of clipseq. We don't use the subworkflow, because the samtools stats clutter up our pipeline output without giving any really useful information.

CharlotteAnne avatar Oct 11 '23 18:10 CharlotteAnne

Actually, that's a lie, lol. At the moment we do use https://github.com/nf-core/clipseq/blob/feat-2-0/subworkflows/local/bam_dedup_samtools_umicollapse.nf, but I'm going to refactor because of the samtools stats clutter.

CharlotteAnne avatar Oct 11 '23 18:10 CharlotteAnne

Hello! After a conversation with @MatthiasZepper over Slack, I decided to get hands-on with this issue. I've already created a branch on a fork of the repo and added the functionality for the user to select whether they want to deduplicate using umitools or umicollapse. I then added the umicollapse module to the pipeline along with @CharlotteAnne's subworkflow from clipseq.
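
The selection logic is essentially a branch on a pipeline parameter, roughly like this (the parameter name umi_dedup_tool is made up for illustration and not necessarily what the branch uses):

    // Sketch: pick the dedup route via a (hypothetical) pipeline parameter.
    if (params.umi_dedup_tool == 'umicollapse') {
        BAM_DEDUP_SAMTOOLS_UMICOLLAPSE ( ch_bam_bai )
        ch_dedup_bam = BAM_DEDUP_SAMTOOLS_UMICOLLAPSE.out.bam
    } else {
        // existing route via the bam_dedup_stats_samtools_umitools subworkflow
        BAM_DEDUP_STATS_SAMTOOLS_UMITOOLS ( ch_bam_bai, params.umitools_dedup_stats )
        ch_dedup_bam = BAM_DEDUP_STATS_SAMTOOLS_UMITOOLS.out.bam
    }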

I have not made the pull request yet because I have not tested or benchmarked it, but feel free to "read" the changes in the code and give all the feedback you want. I'll keep you posted with the results once I have tested :)

ctuni avatar Nov 23 '23 12:11 ctuni

The added umicollapse process is working correctly, but I had to comment out the line where the .command.log is copied to a file, because I got an error about the file not existing... I also had to comment out the lines referencing that log file downstream. Maybe @CharlotteAnne has an idea about how to correctly generate the log for the process? I think a solution could be to redirect the STDOUT of the umicollapse command to the log file, since it is very informative.
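
For illustration, roughly what I mean inside the module's script block (a sketch only, assuming the Bioconda package exposes an umicollapse entry point; file names are made up):

    // Sketch: redirect the tool's informative stdout into a log file,
    // instead of copying .command.log after the fact.
    script:
    def prefix = task.ext.prefix ?: "${meta.id}"
    """
    umicollapse bam \\
        -i ${bam} \\
        -o ${prefix}.dedup.bam \\
        > ${prefix}_UMICollapse.log
    """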

So far I'm only testing that it runs and completes with two samples of the first dataset linked, subsampled to 10k reads (I'm testing it locally :sweat_smile:), but I would love to really benchmark this implementation with more realistic data. Any suggestions or ideas on how we could do this?

ctuni avatar Nov 24 '23 09:11 ctuni

@ctuni: Thank you so much for all your work and effort! Greatly appreciated! We have some sequencing data from human XpressRef Universal Total RNA at different input concentrations and with different kits, which I could run through your pipeline for comparison.

It will take a couple of days, but then we will have a relatively large dataset with a known ground truth to compare against. I could share e.g. the salmon quant files with you for your own analysis?

MatthiasZepper avatar Nov 24 '23 14:11 MatthiasZepper

To give you a small update about my progress (or the lack thereof :grimacing:).

I fetched @ctuni's modification of the pipeline, based on @CharlotteAnne's subworkflow, but felt that for evaluation purposes it would be better to compare the tools on exactly the same BAM files instead of aligning the same reads twice.

Therefore, I tried to .tap() the transcriptome BAM channel, modify the meta.id of the branched samples with an additional suffix, run both tools in parallel, and later mix the output channels again.
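
In rough Nextflow pseudocode, the idea looks something like this (channel and process names are made up for illustration):

    // Sketch only: duplicate the BAM stream so both dedup tools see identical
    // alignments, then merge the results again for quantification.
    ch_transcriptome_bam
        .tap { ch_for_umitools }                                  // untouched copy for umi-tools dedup
        .map { meta, bam ->
            [ meta + [ id: "${meta.id}_umicollapse" ], bam ]      // suffix keeps the outputs apart
        }
        .set { ch_for_umicollapse }

    UMITOOLS_DEDUP ( ch_for_umitools )
    UMICOLLAPSE   ( ch_for_umicollapse )

    // mix both deduplicated outputs back into a single channel
    ch_dedup_bam = UMITOOLS_DEDUP.out.bam.mix( UMICOLLAPSE.out.bam )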

With the same approach, I planned to also get a close comparison of the effects of the Prepare for Salmon process, since the whole undertaking started on Slack with doubts about that script: #828.

Thus, I would essentially get four quantifications, (umitools, umicollapse) x (prepare for salmon, skip prepare for salmon), for the same BAM alignment. The pipeline modified for test purposes resides in my fork, but I keep getting out-of-memory errors for umicollapse.

Even a 256 GB heap doesn't seem to be enough for a single-pass run:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.base/java.util.regex.Matcher.<init>(Matcher.java:248)
    at java.base/java.util.regex.Pattern.matcher(Pattern.java:1134)
    at umicollapse.util.SAMRead.getUMI(SAMRead.java:34)
    at umicollapse.main.DeduplicateSAM.deduplicateAndMerge(DeduplicateSAM.java:109)
    at umicollapse.main.Main.main(Main.java:221)

Therefore, I still have to play around with the Java Virtual Machine memory settings for the initial heap size, maximum heap size and stack size, and should probably set --two-pass as a default argument in the module config.
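
As a reference point, a minimal module config override could look like this (following the nf-core convention that ext.args is appended to the tool's command line; how JVM flags such as -Xms/-Xmx/-Xss reach the java invocation depends on the Bioconda wrapper, so that part is an assumption):

    // Sketch only: make two-pass mode the default and raise the process memory.
    process {
        withName: 'UMICOLLAPSE' {
            ext.args = '--two-pass'   // stream the BAM in two passes to cap heap usage
            memory   = 64.GB          // generous headroom for large BAMs
        }
    }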

MatthiasZepper avatar Jan 03 '24 11:01 MatthiasZepper

Thank you for this effort. Please let me know if I can help in some way. I would appreciate being able to finish the entire pipeline in under 2 hours, instead of the nearly whole day it takes right now.

[Screenshot attached: Screenshot 2024-08-03 at 12 34 01 PM]

siddharthab avatar Aug 03 '24 19:08 siddharthab

Of course, you can help! Much appreciated!

The subworkflow with umicollapse is done, but I was holding back on integrating it into the rnaseq pipeline, since it seemed to me that release 3.15 was imminent and more urgent tasks had to be done first. But since the big refactor is done, you can probably now integrate it without the risk of building on top of a heavily changing foundation with many moving parts (*hopefully).

How about assigning yourself to this task and taking over?

MatthiasZepper avatar Aug 14 '24 12:08 MatthiasZepper

smrnaseq has it implemented too - if you're looking for insights into how it was implemented there, have a look 👍

apeltzer avatar Aug 14 '24 17:08 apeltzer

Thank you! I don't have permissions to change assignments. Please do so if you can. I will send a PR shortly.

siddharthab avatar Aug 14 '24 17:08 siddharthab

Great! Please join the nf-core GitHub organization asap, so you can interact more easily with the different repos and issues! I think e.g. @drpatelh would be happy to send you an invitation, since you are contributing to the rnaseq pipeline.

MatthiasZepper avatar Aug 15 '24 10:08 MatthiasZepper

Running a little behind on this, but still on my list, somewhere near the top.

siddharthab avatar Aug 21 '24 22:08 siddharthab

> Running a little behind on this, but still on my list, somewhere near the top.

Please don't stress yourself over it. I have been neglecting this for months, and without your help it would probably not get anywhere soon either. So take your time; it will not make it into the next release, 3.15 (for which a feature freeze is essentially in place), and the 3.16 release is still a few weeks ahead!

MatthiasZepper avatar Aug 22 '24 09:08 MatthiasZepper

WIP PR at #1369. I think we definitely have to use --two-pass, and I am not sure how well umicollapse handles paired reads.

siddharthab avatar Sep 03 '24 06:09 siddharthab

Crude benchmark results from processing GSE207129. Note that these are paired reads. Execution timeline reports are attached.

Some notes:

  1. --paired uses considerably more memory and time, and processing transcriptome BAMs is much worse than with umitools: with --two-pass, umicollapse takes 9-10 hours instead of ~30 minutes for umitools. Processing genome BAMs takes roughly half the time (but with 10x more memory).
  2. --two-pass keeps memory requirements low. For a 3.2 GB transcriptome-sorted BAM file with paired reads, --two-pass takes 15 GB of RAM instead of 75 GB.

Does anyone have any thoughts on what I could be doing wrong here?

Execution Timeline.zip

siddharthab avatar Sep 03 '24 18:09 siddharthab

I couldn't easily see - are you testing with data that has UMIs? How long are the UMIs, and what is the PCR deduplication ratio of the dataset? With CLIP we found something similar: sometimes UMICollapse can be comparatively inefficient, but it works for tricky large BAM files where umitools just gives up. Probably a question for the tool author.

CharlotteAnne avatar Sep 03 '24 18:09 CharlotteAnne

Yes, the data has 8 bp long UMIs. This is what umi-tools says:

Reads: Input Reads: 44319354, Read pairs: 44319354
Number of reads out: 32494556
Total number of positions deduplicated: 30086237
Mean number of unique UMIs per position: 1.11
Max. number of unique UMIs per position: 147

siddharthab avatar Sep 03 '24 20:09 siddharthab

Ah, I see! These files have many more reads than we would typically process for CLIP, but our PCR deduplication ratio is higher and the UMIs can be longer, both of which also impact the memory requirement. Perhaps the smallRNAseq folk, or the tool author himself on his GitHub repo, would be better able to advise.

CharlotteAnne avatar Sep 03 '24 21:09 CharlotteAnne

Filed https://github.com/Daniel-Liu-c0deb0t/UMICollapse/issues/31 to get a response from the tool author.

If paired-end mode is indeed this slow, my suggestion would be to keep umi-tools as the default and provide UMICollapse only as a manually selected option, probably without MultiQC integration.

siddharthab avatar Sep 03 '24 22:09 siddharthab

@apeltzer Have you ever run UMICollapse in paired-end mode? It seems like the command line the pipeline would form is incorrect (--method should be --algo), so maybe UMICollapse was never tested in paired mode?

https://github.com/nf-core/smrnaseq/blob/5901bea438cb719c186f883bb6e6a11f1de70a5b/conf/modules.config#L143

siddharthab avatar Sep 03 '24 22:09 siddharthab

After fixing the issue in UMICollapse, it generally takes ~60% of the wall time of umi-tools. We will have to wait for the fix to be accepted and for a new release on Bioconda.

siddharthab avatar Sep 04 '24 07:09 siddharthab

smrnaseq only runs on SE data, so the PE mode was never used. As smRNA reads are so short, paired-end protocols do not make any sense for smRNA data. For SE data we saw substantially better performance than we had with umitools.

apeltzer avatar Sep 04 '24 08:09 apeltzer

Wow, one night of good sleep and one morning of not paying attention to GitHub and meanwhile this happened...🤯.

I have not yet reviewed your PR, but from briefly glancing over this conversation I already got the impression that you, @siddharthab, have made an outstanding contribution! Thank you so much for your PR to the pipeline, including extensive QC and profiling!

I am flabbergasted by your perspicacity - identifying the reason for the slow transcriptome deduplication in the tool's code in no time and fixing it right away! Brilliant!

MatthiasZepper avatar Sep 04 '24 11:09 MatthiasZepper

@CharlotteAnne, it looks like the conda recipe uses your forked repo. Would you be willing to update the recipe to use a released version from your repo or mine until the author merges the change?

siddharthab avatar Oct 03 '24 23:10 siddharthab

Since the Bioconda recipe already points to a fork of the original repo, I see no problem with changing it to your fork, @siddharthab. If you could make a 1.0.1 release that includes the transcriptomic dedup bug fix, I can update the Bioconda recipe to point to your fork and the correct SHA.

MatthiasZepper avatar Oct 08 '24 13:10 MatthiasZepper

@MatthiasZepper, created https://github.com/bioconda/bioconda-recipes/pull/51441.

siddharthab avatar Oct 16 '24 18:10 siddharthab

Thanks a lot for all the work you are investing in this! Unfortunately, I could not approve your PR to Bioconda myself, but I found somebody who could!

MatthiasZepper avatar Oct 17 '24 15:10 MatthiasZepper