`Samtools + libdeflate` outperforms `sambamba` on a single thread
Hello,

I recently heard about `sambamba` and its performance gains over `samtools`, and was excited to compare it to `samtools + zlib` and `samtools + libdeflate` (I had also heard that `libdeflate` really improves `samtools` performance).
I compared all three configurations; you can see my full post here: *Samtools sort: most efficient memory and thread settings for many samples on a cluster*.

In short, I compare overall performance (measured by wall-clock time) at different CPU and memory settings. I was impressed that `sambamba` outperforms the other two in pretty much every configuration. There were two things I wanted to share directly that may be of interest (a rough sketch of how each configuration can be invoked follows the list):
- Using only one thread, `samtools + libdeflate` outperforms `sambamba`, which suggests `sambamba` could be optimized even more at the compression steps (Fig. 1). You can compare `sambamba` (red) and `samtools + libdeflate` (purple) at 1 CPU on the far left of Fig. 1. I'm not sure what `sambamba` uses for compression, though. I'm guessing it doesn't use `libdeflate`, otherwise I suspect it would have suffered from the same poor CPU utilization that `samtools + libdeflate` suffered from with additional threads. If `sambamba` is using `zlib`, however, I suspect you could really push the limits for manipulating `.bam` files.
- I also wanted to see how well each tool could utilize the CPUs allotted to it. `sambamba` does the best at utilizing allotted CPUs, but it also eventually flattens out. This is obviously a classic computer science problem, but I thought you might like to see where `sambamba` flattens out. TBH, I doubt there's much incentive to optimize CPU usage any higher than 9 CPUs, anyway, but who knows? `samtools + libdeflate` flattens out very quickly and is unable to fully utilize allotted CPUs as well as the other two configurations (Fig. 2). I assume this boils down to `libdeflate`, but maybe it's more complicated than that. I reported this on the `libdeflate` GitHub page so they can look into it.
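In case it helps anyone reproduce this, here is a minimal sketch of how the three configurations can be timed. The thread count, memory values, and file names are placeholders rather than my exact cluster settings (those are in the linked post), and GNU time is assumed for the `-v` output; the `zlib` and `libdeflate` cases use the same `samtools` command line and differ only in which library `samtools` was linked against at build time.

```sh
THREADS=4             # placeholder; the post sweeps this from 1 upward
MEM_PER_THREAD=768M   # samtools sort: -m is memory per sort thread
TOTAL_MEM=4GB         # sambamba sort: -m is an approximate total memory limit

# samtools + zlib and samtools + libdeflate: identical command, different build.
/usr/bin/time -v samtools sort -@ "$THREADS" -m "$MEM_PER_THREAD" \
    -o sorted.samtools.bam input.bam

# sambamba with a comparable thread count and overall memory budget.
/usr/bin/time -v sambamba sort -t "$THREADS" -m "$TOTAL_MEM" \
    -o sorted.sambamba.bam input.bam
```

In the GNU time output, "Elapsed (wall clock) time" corresponds to the realtime in Fig. 1 and "Percent of CPU this job got" to the utilization in Fig. 2.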
And thank you for your work. We need more efficient tools like `sambamba`!
Figure 1: Realtime vs. CPU and memory per thread for `samtools + zlib`, `samtools + libdeflate` (Lsamtools), and `sambamba`
Figure 2: Requested CPUs vs. CPU utilization for `samtools + zlib`, `samtools + libdeflate` (Lsamtools), and `sambamba`
Thanks. It is worth trying and should not be hard to test with guix.
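For example, something along these lines should be enough to get both tools side by side. The package names here are an assumption (check them with `guix search` first), and which compression library the Guix `samtools` package links against will determine whether you are testing the zlib or the libdeflate behaviour:

```sh
# Throwaway environment with both tools; nothing is installed into the profile.
guix shell samtools sambamba -- samtools --version
guix shell samtools sambamba -- sambamba sort -t 4 -o sorted.bam input.bam
```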
I heard back on my post to `libdeflate`. I don't know much about different compression methods, but here are the key takeaways I had:
- The author of `libdeflate` doesn't think there's any reason `libdeflate` itself would limit CPU usage.
- He also said the "LZ4 compression format results in faster compression and decompression, but a worse compression ratio than DEFLATE." So, it sounds like `libdeflate` won't be any faster. Maybe there's some other explanation for why `samtools + libdeflate` performed better with a single thread. It may have been something I did (e.g., tens of different threads reading from the same files?). Might be worth looking over my code and testing it out yourself to verify; a minimal single-thread rerun is sketched below.
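If you do want to re-test the single-thread case without my cluster setup, something as simple as the following should isolate it from the concurrent-jobs question. File names and the repeat count are placeholders, and GNU time is assumed for the `-f` format flags:

```sh
# Repeat each single-threaded sort a few times on an otherwise idle node,
# so concurrent jobs reading the same inputs can't skew the timings.
for i in 1 2 3; do
    /usr/bin/time -f "samtools run $i: %e s, %P CPU" \
        samtools sort -@ 1 -m 768M -o samtools.run$i.bam input.bam
    /usr/bin/time -f "sambamba run $i: %e s, %P CPU" \
        sambamba sort -t 1 -o sambamba.run$i.bam input.bam
done
```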
Anyway, just wanted to report what I found in case it could be useful.
Oh, I meant to include a link to the `libdeflate` post: https://github.com/ebiggers/libdeflate/issues/170