M3 Mac Benchmarks
Are there any M3 MacBook benchmarks available? The press release only mentions that basecalling is x times faster than on an Intel-based Mac, which would not have had GPU acceleration (so not really a fair comparison).
Useful data here would be whether live SUP-model basecalling is possible, and whether the machine could even run multiple MinIONs at once with different basecalling models.
I'm looking to get an M3 Max specifically for this application and I want to get a view on whether it's a good idea or not.
Hey @mcrone, we've just had an M3 Max arrive, so we can update you with some benchmarking next week.
Sounds great! Looking forward to it!
@mcrone here are some performance numbers from a 40-core M3 Max MacBook Pro.
| model | samples/s |
| --- | --- |
| fast | 4.83e+07 |
| hac | 4.37e+06 |
| sup | 5.12e+05 |
Thanks @iiSeymour. In practice, would this allow for live sup model basecalling? (My assumption is yes?) I'm just not sure exactly how to convert samples/second into what that means in practice.
@mcrone samples/s / (15e9 bases / 72 / 3600 * 5 kHz / 400 bps)
| model | samples/s | devices kept up with at 15 Gbases / 72 h |
| --- | --- | --- |
| fast | 4.83e+07 | 66.8 |
| hac | 4.37e+06 | 6.04 |
| sup | 5.12e+05 | 0.71 |
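To make the conversion concrete, here is a minimal Python sketch of the same arithmetic, using the 5 kHz sample rate, ~400 bases/s translocation speed, and 15 Gbase / 72 h run figures quoted above (these constants describe an assumed typical MinION run, not anything read from dorado):

```python
# Minimal sketch (not from dorado) of the keep-up arithmetic above.
# A pore sampled at 5 kHz translocating ~400 bases/s produces 5000 / 400 = 12.5
# raw samples per base, so keeping up with a 15 Gbase run over 72 h requires:
#   required samples/s = 15e9 / (72 * 3600) * 12.5  ~= 7.2e5

SAMPLE_RATE_HZ = 5_000        # 5 kHz sampling rate
TRANSLOCATION_BPS = 400       # ~400 bases/s through the pore
RUN_YIELD_BASES = 15e9        # 15 Gbases per flow cell
RUN_SECONDS = 72 * 3600       # 72 h run

samples_per_base = SAMPLE_RATE_HZ / TRANSLOCATION_BPS        # 12.5
required = RUN_YIELD_BASES / RUN_SECONDS * samples_per_base  # ~7.2e5 samples/s

m3_max = {"fast": 4.83e7, "hac": 4.37e6, "sup": 5.12e5}      # numbers from above
for model, samples_per_s in m3_max.items():
    print(f"{model}: {samples_per_s / required:.2f}x real time")
# fast: 66.77x, hac: 6.04x, sup: 0.71x -- matching the table
```

A factor above 1 means the machine basecalls faster than the flow cell produces data, so on these numbers the M3 Max keeps up comfortably with fast and hac but runs sup at only ~0.7x real time.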
Are there more benchmarks like this for other GPUs?
I'm considering building a deskside system to use for sup basecalling with dorado. I saw Ryan Wick's Onion post and was wondering whether the RTX 4090 really beats the RTX 6000 Ada Generation for basecalling. Does anyone know?
What would it take to keep up with sup basecalling using dorado with a P2?
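For a rough sense of what a P2 would require, the same formula can be applied with an assumed per-flow-cell yield; the 100 Gbases per flow cell below is only a placeholder for illustration, not an ONT specification:

```python
# Sketch only: the keep-up arithmetic above applied to a P2.
# The per-flow-cell yield is an ASSUMED placeholder -- substitute your own figure.

def required_samples_per_s(yield_bases: float, run_hours: float,
                           sample_rate_hz: float = 5_000,
                           translocation_bps: float = 400) -> float:
    """Samples/s a basecaller must sustain to keep up with a run in real time."""
    samples_per_base = sample_rate_hz / translocation_bps
    return yield_bases / (run_hours * 3600) * samples_per_base

# Hypothetical P2 run: 2 flow cells, an assumed 100 Gbases each, over 72 h.
need = required_samples_per_s(2 * 100e9, 72)
print(f"required: {need:.2e} samples/s")              # ~9.6e6 samples/s
print(f"sup on M3 Max: {5.12e5 / need:.2f}x needed")  # ~0.05x of what is needed
```

On that assumed yield, live sup basecalling for a P2 would need a sustained rate roughly an order of magnitude above the hac figure reported for the M3 Max above.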