Richard Harris
Ah, I see. In that case, please create a list of the read IDs missing from the first merged output relative to all of the inputs, using `pod5 view`. ```bash # get read...
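The comment above is truncated, but the approach it describes can be sketched roughly as follows. The filenames, and the `--include read_id --no-header` column-selection flags for `pod5 view`, are assumptions here; check `pod5 view --help` for the exact options in your version:

```shell
# Hypothetical sketch: dump read ids from the merged output and from all
# inputs, then diff the two sorted lists. Filenames are placeholders.
pod5 view merged.pod5 --include read_id --no-header | sort -u > merged_ids.txt
pod5 view inputs/*.pod5 --include read_id --no-header | sort -u > input_ids.txt

# ids present in the inputs but absent from the merged file
comm -13 merged_ids.txt input_ids.txt > missing_ids.txt
wc -l missing_ids.txt
```

`comm -13` suppresses lines unique to the first file and lines common to both, leaving only the IDs that never made it into the merged output.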
I recommend using a Python virtual environment instead of a conda environment:

```bash
python3.10 -m venv venv --prompt=pod5
source venv/bin/activate
pip install -U pip pod5
pod5 --version
```
Hi @SimonChen1997, It looks like the original basecalling job crashed, which is why you have very little output:

```
terminate called after throwing an instance of 'std::runtime_error'
  what(): Empty...
```
Hi @JannesSP, Thanks for raising this issue - we will investigate this and get back to you soon. Best regards, Rich
We've identified the issue and will put out a patch soon - thanks for your patience.
Hi @mcrone, can you reduce the batch size to see if this is a GPU issue?
Duplicate of https://github.com/nanoporetech/dorado/issues/1041
Hi @mulderdt, we're working on tuning the automatic batch size calculation to deliver both stability and performance, which is challenging across the many hardware variants. You found the appropriate solution though, which...
@liuyang2006, reduce the batch size: the automatic selection used 6336, so try `5120`:

```bash
dorado basecaller --batchsize 5120 ... > calls.bam
```
@claczny, reducing the batch size by [about 10%](https://dorado-docs.readthedocs.io/en/latest/troubleshooting/troubleshooting/#cuda-out-of-memory) is usually enough to bring GPU memory usage back within limits.
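As a rough illustration of the ~10% reduction described in the linked docs (the helper name and the rounding-to-a-multiple-of-64 choice are my own assumptions, not part of Dorado):

```python
# Illustrative only: shrink a reported auto batch size by ~10%,
# rounding down to a multiple of 64 (a common GPU-friendly granularity).
def reduced_batchsize(auto_bs: int, fraction: float = 0.10) -> int:
    reduced = int(auto_bs * (1 - fraction))
    return reduced - (reduced % 64)

# e.g. for the auto batch size of 6336 seen in the comment above:
print(reduced_batchsize(6336))  # prints 5696
```

The result is then passed to `dorado basecaller --batchsize`.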