Allow Passing --chunksize Parameter to Dorado Server
Hi @Psy-Fer,
I'm using buttery-eel to process BLOW5 files with Dorado, but I encountered an issue where Dorado runs a full benchmarking process for my GPU (NVIDIA GeForce RTX 4060 Ti) due to missing predefined chunk benchmarks:
```
2025-03-04 00:20:34.166195 [ont/warning] Unable to find chunk benchmarks for GPU "NVIDIA GeForce RTX 4060 Ti", model /opt/pkgs/ont-dorado-server-7.6.8/data/[email protected] and chunk size 1500. Full benchmarking will run for this device, which may take some time.
```
Since buttery-eel does not expose an option to pass --chunksize to Dorado, users currently cannot manually specify this parameter to avoid full benchmarking on every basecalling run.
Proposed Feature
Would it be possible to add an option in buttery-eel to pass --chunksize to Dorado when launching the server?
For example:
```
buttery-eel --chunksize 1500 ...
```
Alternatively, if there's already a way to configure this through an existing parameter, could you point me in the right direction?
Thanks for your time! Looking forward to your thoughts.
Best, Elton
Hey,
So previously this was an option in buttery-eel; however, I removed it, as it was a passthrough arg you could send to the server, much like --use_tcp.
But then ONT removed this option, along with batchsize, in the newer dorado-server releases.
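For context, a passthrough arg here just means buttery-eel appended it verbatim to the server's launch command. A minimal sketch of the mechanism (the helper name and the `--port` wiring are hypothetical illustrations, not buttery-eel's actual code):

```python
import subprocess

def start_basecall_server(server_bin, port, passthrough_args=None):
    """Launch a basecall server binary, forwarding any extra args
    (e.g. --use_tcp) verbatim. Hypothetical helper for illustration;
    a real server launch needs more args (config, log path, etc.)."""
    cmd = [server_bin, "--port", str(port)]
    cmd += list(passthrough_args or [])  # passthrough args land here unchanged
    return subprocess.Popen(cmd)

# One common way to collect passthrough args is argparse.parse_known_args():
#   known, unknown = parser.parse_known_args()
#   start_basecall_server("dorado_basecall_server", 5555, unknown)
```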
While there is still a chunk size argument in the client API, they describe it like this:

```
chunk_size (int): For adaptive sampling. Specify the chunk-size for basecalling.
If you are truncating reads, send the value you are truncating to.
```
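So if you wanted to experiment with it at the API level rather than through buttery-eel, it would look roughly like this. This is a sketch only: it assumes PyBasecallClient accepts chunk_size as a keyword per the docstring above, and the address and config values are placeholders, so check the names against your installed ont-pybasecall-client-lib version:

```python
# Sketch only: assumes PyBasecallClient takes a chunk_size keyword,
# per the docstring quoted above. Names and defaults may differ
# between ont-pybasecall-client-lib releases.
from pybasecall_client_lib.pyclient import PyBasecallClient

client = PyBasecallClient(
    address="127.0.0.1:5555",                   # placeholder server address
    config="dna_r10.4.1_e8.2_400bps_fast.cfg",  # placeholder model config
    chunk_size=1500,  # per the docs: meant for adaptive sampling /
                      # truncated reads, not a general tuning knob
)
client.connect()
```

Given it's documented for adaptive sampling, I wouldn't expect it to stand in for the old server-side --chunksize.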
Other than this, there isn't anything else that can set batch or chunk size; it's now down to the samples to package and the number of reads to process, the latter of which is overwritten by buttery-eel.
You could always ask ONT to add a profile for the 4060 Ti card. I get the same message on my 3050 Ti in my laptop during initial testing of dev builds, and it doesn't seem to take very long, only about 30s or so. How long is it taking for you?
Cheers, James
Oops, forgot to respond...
Thanks for the info! Closing the issue as complete.