pbmm2 (a minimap2 wrapper for PacBio data) supports PacBio BAM files: https://github.com/PacificBiosciences/pbmm2
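For example, a typical invocation looks something like the sketch below; the file names (reference.fasta, subreads.bam, subreads.aligned.bam) are placeholders, and the preset may need adjusting for your data:

```
# Align native PacBio subreads with pbmm2 (minimap2 under the hood).
# --preset SUBREAD tunes the minimap2 parameters for subread data;
# --sort coordinate-sorts and indexes the output BAM.
pbmm2 align --preset SUBREAD --sort reference.fasta subreads.bam subreads.aligned.bam
```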
Perhaps. However, PacBio BAM files contain additional per-read tags with information about each read, which are lost in the process of converting to FASTQ.
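As a rough illustration of what gets dropped, here is one way to look at those tags with samtools (subreads.bam is a placeholder; the exact tag set varies by instrument and chemistry):

```
# Print the first read's fields one per line; everything after the 11 standard
# SAM columns (e.g. zm, np, rq, ip, pw, sn) is PacBio-specific metadata that a
# plain FASTQ file cannot carry.
samtools view subreads.bam | head -n 1 | tr '\t' '\n'
```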
I just confirmed by extracting the ZMW hole numbers from each, and they are identical, so that can't be the reason. Also, no FASTQ file was created.
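For anyone following along, one way to do such a comparison is something like the sketch below (it assumes the standard PacBio read-name layout movie/holeNumber/... and placeholder file names):

```
# Extract the ZMW hole number from each read name and compare the two sets.
samtools view ccs.bam | cut -f1 | cut -d'/' -f2 | sort -n -u > ccs_zmws.txt
samtools view subreads_to_ccs.bam | cut -f1 | cut -d'/' -f2 | sort -n -u > aligned_zmws.txt
diff ccs_zmws.txt aligned_zmws.txt   # no output means the ZMW sets are identical
```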
Sure. Do you have a Dropbox or an e-mail address? I can send a link.
I am using the r0.3.1 Docker image, and I downloaded the model from the link in the quick start: `gsutil cp -r gs://brain-genomics-public/research/deepconsensus/models/v0.3/model_checkpoint/*` Is that not correct? Thanks.
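In case it helps to spell it out, a sketch of how such a download could look with a destination directory (MODEL_DIR is just an example path, not one taken from the quick start):

```
# Copy the v0.3 model checkpoint into a local directory (example path).
MODEL_DIR=model
mkdir -p "${MODEL_DIR}"
gsutil cp -r gs://brain-genomics-public/research/deepconsensus/models/v0.3/model_checkpoint/* "${MODEL_DIR}"/
```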
Sure, here is the command:
```
singularity run -W /data \
  -B /scratch/projects/lab/bin/deepconsensus/model:/model \
  -B `pwd` \
  /scratch/bin/deepconsensus/deepconsensus_0.3.1.sif \
  deepconsensus run \
    --batch_size=1024 \
    --batch_zmws=100 \
    --cpus 8 \
    --max_passes 100 \
    --subreads_to_ccs=subreads_to_ccs.bam \
    --ccs_bam=ccs.bam \
    --checkpoint=/model/checkpoint \
    --output=output.deepconsensus.fastq
```
However, I checked and that downloads the same files I already have. What else could be the issue?
I'm still getting the same error. Maybe your deepconsensus_0.3.1 Docker image is the problem and has the wrong version of deepconsensus packaged inside? Can you double-check using your Docker image? If...
OK, thanks. Is there any possibility of training a model with a higher max_passes?
Also, now I'm getting a different error:
```
=================================================================
Total params: 8,942,667
Trainable params: 8,942,667
Non-trainable params: 0
_________________________________________________________________
I0802 23:05:13.125668 22923163895616 model_utils.py:231] Setting hidden size to transformer_input_size.
I0802 23:05:13.125833...
```