Chris Seymour

Results: 155 comments by Chris Seymour

Oh good, I suspected K80s given the previous performance, and while 2.5E+05 is better, it's still almost 10X slower than what should be possible on a V100 _(with the default...

Best to test both locations, @devonorourke, to compare against the baseline `~2.4E+05` you have already seen on V100s.

Hey @helix-phoenix, unfortunately PyPI doesn't support prebuilt packages for `aarch64`.

Yes, because the new models train so much faster I hadn't used the multi-GPU training path with them - thanks for flagging.

With automatic mixed precision on a V100, a 768-wide model should take less than 8 hours for 5 epochs. Here's a training log on the currently available training data...

For sure! It's not on PyPI, which is a pain; however, the functionality has been upstreamed into PyTorch since 1.6.
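Assuming this is about automatic mixed precision (the old NVIDIA Apex `amp`), here's a minimal sketch of the upstreamed `torch.cuda.amp` pattern; illustrative only, not bonito's actual training loop.

```python
# Minimal automatic mixed precision sketch with torch.cuda.amp (PyTorch >= 1.6).
# Illustrative only: the model and data below are stand-ins, not bonito code.
import torch

model = torch.nn.Linear(768, 768).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

for _ in range(10):
    x = torch.randn(32, 768, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():       # forward pass runs in mixed precision
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()         # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```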

Hey @emmarl25, what does `echo $CUDA_VERSION` give you? Can you unset it for the install?

```
$ unset CUDA_VERSION
$ pip install ont-bonito
```

What CPU arch is your cluster? Is it x86_64? I could see this happening if you're on POWER9, for example. If it's x86_64 then maybe it's just an old pip issue....
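If it helps, a quick way to check both from Python (a minimal sketch, nothing bonito-specific):

```python
# Sanity-check the cluster architecture and pip version before assuming
# a packaging problem; illustrative only, not part of bonito.
import platform
import pip

print(platform.machine())  # e.g. 'x86_64', 'aarch64', or 'ppc64le' on POWER9
print(pip.__version__)     # manylinux wheels need a reasonably recent pip
```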

The preprocessing step is required to construct the read group header in the SAM/BAM _(making this [always true](https://github.com/nanoporetech/bonito/blob/master/bonito/cli/basecaller.py#L86) would be a quick hack to skip it)._ The work is already...
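For reference, the read group header is just a tab-separated `@RG` record in the SAM header. A minimal sketch of building one is below; the tag values are made-up placeholders, not what bonito actually writes.

```python
# Illustrative only: the shape of a SAM @RG header line, with placeholder values.
def read_group_line(run_id: str, model: str, sample: str = "unknown") -> str:
    return "\t".join([
        "@RG",
        f"ID:{run_id}",                  # read group identifier
        "PL:ONT",                        # sequencing platform
        f"DS:basecall_model={model}",    # free-text description
        f"SM:{sample}",                  # sample name
    ])

print(read_group_line("a1b2c3d4", "example_model"))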

Hello @libenping, yes, the mean qscore values in the summary file for Bonito are placeholders as this is not implemented. Bonito-style CRF models will be the default models in...
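For anyone wondering what a real value would look like once implemented: the usual convention is to average per-base error probabilities rather than Phred scores. A hedged sketch, not bonito's implementation:

```python
# Mean qscore of a read from its per-base Phred scores, computed by averaging
# error probabilities and converting back to the Phred scale; a sketch only.
import numpy as np

def mean_qscore(phred: np.ndarray) -> float:
    err = 10 ** (-phred / 10)                 # Phred -> per-base error probability
    return float(-10 * np.log10(err.mean()))  # mean error back to Phred scale

print(mean_qscore(np.array([10.0, 20.0, 30.0])))  # ~14.3, pulled down by the worst bases
```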