Flye stops after minimap2 without any error message
Hello,
I am facing a problem where the Flye assembly does not compute the consensus sequence. It runs minimap2 and appears to finish that step. The command was:

flye --nano-raw /vol/data/20240116_reads/ont_merged.fastq.gz --out-dir 20240119_flye --threads 30 --resume &
The 10-consensus folder contains the following files:

total 57G
-rw-rw-r-- 1 ubuntu ubuntu 2.1G Jan 26 09:34 chunks.fasta
-rw-rw-r-- 1 ubuntu ubuntu 308K Jan 26 09:34 chunks.fasta.fai
-rw-rw-r-- 1 ubuntu ubuntu  55G Jan 26 12:29 minimap.bam
-rw-rw-r-- 1 ubuntu ubuntu  31M Jan 25 21:47 minimap.bam.bai
The Flye log does not include any error messages; it simply does not progress past this point:
[2024-01-26 09:33:54] root: INFO: >>>STAGE: consensus
[2024-01-26 09:34:26] root: INFO: Running Minimap2
[2024-01-26 12:30:02] root: INFO: Computing consensus
It shouldn't be a memory issue, as we have 3.5 TB available; I was monitoring the RAM usage and it never went above 3%, and the cache didn't fill up either. I attempted to resume the run, but it restarted from minimap2 and stopped at the same point. Do you have any idea what else I could try?
Consensus computation may take a while; are you sure it has crashed? Is there any CPU activity? Does it work for you on smaller datasets? Are there any error messages in the terminal?
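For example, on a Linux system something along these lines should show whether the Flye/Python worker processes are still alive and using CPU (a rough sketch, not Flye-specific tooling):

# list any running Flye-related processes with their CPU and memory usage
ps aux | grep -i '[f]lye'
# or watch them interactively
top -c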
I believe it crashed: there is no more CPU activity and no trace of Flye in the job overview. I don't receive any error messages in the terminal or in any of the log files. It has previously worked for smaller datasets, and I am able to run the example files without any issue. It just does not appear to move past minimap2 for this dataset.
I think it is very likely that the Python processes were killed by the OS for some reason. This is system-specific and very hard to debug. I'd try to allocate more resources on your system (or reduce the number of threads in Flye). You can also check the system logs for any Python instances that were killed.
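On most Linux systems something like the following should reveal whether the OOM killer (or another signal) terminated a process; the exact log location may differ on your setup:

# look for out-of-memory kills in the kernel log
dmesg -T | grep -i -E 'killed process|out of memory'
# or, on systemd-based systems
journalctl -k | grep -i -E 'killed process|out of memory'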
Closed due to inactivity, feel free to reopen if you still need help with this issue!