No corrected reads created
Dear Canu team,
We contacted you last year regarding amplicon assembly and you kindly answered. Our tests with amplicons larger than 1000 bp showed promising results. We have now tested new data with amplicons of ~400 bp and the output is more erratic and mostly empty. We are getting the message below for certain samples, while others do produce a contig file:
-- Finished on Thu Mar 21 14:54:37 2024 (one second) with 486.595 GB free disk space
----------------------------------------
-- Found 1 read correction output files.
-- Finished stage 'cor-generateCorrectedReadsCheck', reset canuIteration.
-- Found 1 read correction output files.
-- Finished stage 'cor-generateCorrectedReadsCheck', reset canuIteration.
--
-- Loading corrected reads into corStore and seqStore.
----------------------------------------
-- Starting command on Thu Mar 21 14:54:37 2024 with 486.595 GB free disk space
cd correction
/usr/local/bin/loadCorrectedReads \
-S ../barcode07.seqStore \
-C ./barcode07.corStore \
-L ./2-correction/corjob.files \
> ./barcode07.loadCorrectedReads.log \
2> ./barcode07.loadCorrectedReads.err
-- Finished on Thu Mar 21 14:54:37 2024 (like a bat out of hell) with 486.591 GB free disk space
----------------------------------------
--
-- No corrected reads generated; correctReads output saved.
--
-- Purging overlaps used for correction.
-- Finished stage 'cor-loadCorrectedReads', reset canuIteration.
----------------------------------------
-- Starting command on Thu Mar 21 14:54:37 2024 with 486.598 GB free disk space
cd .
/usr/local/bin/sqStoreDumpFASTQ \
-corrected \
-S ./barcode07.seqStore \
-o ./barcode07.correctedReads.gz \
-fasta \
-nolibname \
> barcode07.correctedReads.fasta.err 2>&1
-- Finished on Thu Mar 21 14:54:37 2024 (in the blink of an eye) with 486.597 GB free disk space
----------------------------------------
--
-- Corrected reads saved in 'barcode07.correctedReads.fasta.gz'.
-- Finished stage 'cor-dumpCorrectedReads', reset canuIteration.
--
-- Trimming skipped; no corrected reads exist in barcode07.seqStore.
--
-- Unitigging skipped; no corrected reads to assemble.
--
-- Bye.
canu v2.2
canu \
-p barcode05 \
-nanopore \
genomeSize=1000 \
'maxInputCoverage=100' 'minReadLength=150' 'minOverlapLength=50' contigFilter='3 0 1.0 0.8 0' 'stopOnLowCoverage = 0' 'corMhapSensitivity=high' \
maxThreads=12 \
barcode05.trimmed.fastq.gz
gzip *.fasta
System: CentOS, run within Nextflow.
Thanks for the help.
Just to complete my question: these are amplicons with 50K+ reads.
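In case it is useful, this is roughly how we check the read-length distribution of the input (a minimal sketch, assuming zcat and awk are available; the file name matches one of our samples):
zcat barcode05.trimmed.fastq.gz \
  | awk 'NR % 4 == 2 { len = length($0); n++; sum += len;
                       if (n == 1 || len < lo) lo = len;
                       if (len > hi) hi = len }
         END { printf "reads: %d  mean: %.0f bp  min: %d  max: %d\n", n, sum / n, lo, hi }'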
There was an issue in v2.2 with correcting reads shorter than 500 bp, see https://github.com/marbl/canu/issues/2182. You can either use the latest tip of Canu or wait for the v2.3 release, which should be out in the next couple of weeks.
Thank you very much, I will update it then. Thanks for your work.
Question answered
I have tried to find a way to use the dev branch, but the GitHub docs say that it is not possible to compile from the GitHub version, and I really need the current updates. How could I get the newest changes? @skoren
The docs say that it is not possible to compile from downloaded .zip source code, but it should work if the repository is cloned; have you tried that?
Yes, @gringer is correct. You can check out and compile the latest unreleased code following the instructions here: https://github.com/marbl/canu?tab=readme-ov-file#install, starting with git clone.
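Roughly, that amounts to the following (a minimal sketch assuming git, a C++ compiler, and GNU make are installed; the location of the binaries may differ between versions):
git clone https://github.com/marbl/canu.git
cd canu/src
make -j 12
# The compiled binaries should end up under canu/build/bin/.
../build/bin/canu -version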
OK, I managed to compile from the repo inside a Singularity container (Ubuntu), but now I am getting this error (a rough sketch of the container recipe is at the end of this message):
ABORT:
ABORT: canu snapshot v2.3-development +162 changes (r10433 c61ebbb7a5f90abb9c034650e2fd95642138de31)
ABORT: Don't panic, but a mostly harmless error occurred and Canu stopped.
ABORT: Try restarting. If that doesn't work, ask for help.
ABORT:
ABORT: failed to find the number of jobs in 'unitigging/0-mercounts/meryl-count.sh'.
ABORT:
Any idea if it's something on my end?
Thanks a lot for the support.
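For reference, the container build was along these lines (a minimal sketch; the base image, package list, and install paths are assumptions rather than an exact copy of my recipe):
# Hypothetical Singularity definition, written to a file and then built into an image.
cat > canu-dev.def <<'EOF'
Bootstrap: docker
From: ubuntu:22.04

%post
    export DEBIAN_FRONTEND=noninteractive
    apt-get update
    apt-get install -y git build-essential perl default-jre-headless
    git clone https://github.com/marbl/canu.git /opt/canu
    cd /opt/canu/src && make -j 4

%environment
    export PATH=/opt/canu/build/bin:$PATH
EOF
sudo singularity build canu-dev.sif canu-dev.def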
Can you open a new issue and post the command you're using and the full canu output, including the report file it generates?
Alright, I opened a new issue, thanks for the support!