spades-hammer finished abnormally, OS return value: -7
Description of bug
Dear Spades Team,
I'm getting this spades-hammer error after 38 hours of a metaSPAdes assembly run:

== Error == system call for: "['/nobackup/fbsev/bioinformatics-tools/SPAdes-3.15.4-Linux/bin/spades-hammer', '/nobackup/fbsev/LeedsOmics/DougStewart-Jan22/metaWRAP-run/ASSEMBLY_A/metaspades/corrected/configs/config.info']" finished abnormally, OS return value: -7

Plus, as you will see in my spades.log, the "--restart-from last" argument is not working: it restarted and overwrote the whole 84 GB of content in the previous run's output directory.
Any clue on that will be appreciated, Thanks, Elton
spades.log
params.txt
SPAdes version
3.15.4
Operating System
CentOS
Python Version
3.7.4
Method of SPAdes installation
binaries
No errors reported in spades.log
- [X] Yes
Hello,
You might be running out of RAM; more information can likely be found in the system log. In the meantime, running --only-assembler may work.
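For example, a minimal sketch of such a rerun (the read files, thread count, and memory cap below are placeholders, not values from your run):

```bash
# Rerun metaSPAdes skipping the BayesHammer read error correction stage.
# Substitute your own reads, thread count (-t) and memory limit in GB (-m).
/nobackup/fbsev/bioinformatics-tools/SPAdes-3.15.4-Linux/bin/metaspades.py \
    -1 reads_R1.fastq.gz \
    -2 reads_R2.fastq.gz \
    -t 16 -m 250 \
    -o metaspades_only_assembler \
    --only-assembler
```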
> Plus, as you will see in my spades.log, the "--restart-from last" argument is not working: it restarted and overwrote the whole 84 GB of content in the previous run's output directory.
This is expected: there are no checkpoints inside BayesHammer.
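For reference, a sketch of the two restart modes (assuming the previous output directory is intact; since BayesHammer has no internal checkpoints, both will redo read error correction from scratch if that is the stage that failed):

```bash
# --continue resumes from the last recorded checkpoint; if I remember the
# manual correctly, it should be given only the -o option.
spades.py --continue -o /path/to/previous/output

# --restart-from re-runs from a named stage (ec, as, k<int>, mc, or "last")
# and rewrites that stage's output inside the same directory, which is why
# the earlier 84 GB of results were overwritten.
spades.py --restart-from last -o /path/to/previous/output
```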
Thanks a lot! It worked with --only-assembler. Cheers, Elton
Hi @asl, sorry to bring up an old thread, but we are hitting the same -7 exit code issue on spades-hammer, even though I believe we are giving it sufficient memory.
0:12:16.435 269M / 11G INFO K-mer Counting (kmer_data.cpp : 354) Arranging kmers in hash map order
0:12:29.767 5910M / 11G INFO General (main.cpp : 148) Clustering Hamming graph.
0:16:48.765 5910M / 11G INFO General (main.cpp : 155) Extracting clusters:
0:16:48.765 5910M / 11G INFO General (concurrent_dsu.cpp : 18) Connecting to root
0:16:49.069 5910M / 11G INFO General (concurrent_dsu.cpp : 34) Calculating counts
0:17:40.084 15G / 15G INFO General (concurrent_dsu.cpp : 63) Writing down entries
== Error == system call for: "['/usr/local/bin/spades-hammer', '/fusion/s3/nf-core-awsmegatests/work/mag/work-32cc2cc274e1aa97e6b60d58760a79d3f1cf90e8/c9/95a994d0940d7299adde88e485f556/spades/corrected/configs/config.info']" finished abnormally, OS return value: -7
None
In case you have troubles running SPAdes, you can write to [email protected]
or report an issue on our GitHub repository github.com/ablab/spades
Please provide us with params.txt and spades.log files from the output directory.
SPAdes log can be found here: /fusion/s3/nf-core-awsmegatests/work/mag/work-32cc2cc274e1aa97e6b60d58760a79d3f1cf90e8/c9/95a994d0940d7299adde88e485f556/spades/spades.log
Thank you for using SPAdes!
This is the end of the log. However, we have allocated 128 GB of memory to the SPAdes job, and if I understand correctly, the step where it is crashing (writing down entries) peaks at only ~15 GB of memory.

Googling an exit code of 7, the only hit I've found is this one from libc; could this be relevant?

If you have any further advice on how to debug this error, it would be very helpful.
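One more thought, though I may be misreading the convention: spades.py launches its stages via Python's subprocess module, which reports a child killed by signal N as return code -N. If so, "OS return value: -7" would mean signal 7 (SIGBUS on Linux) rather than libc's exit code 7:

```bash
# Signal 7 on x86-64 Linux is SIGBUS:
kill -l 7                        # prints "BUS"
# A shell reports the same death as 128 + N:
sh -c 'kill -BUS $$'; echo $?    # prints 135; Python subprocess reports -7
```

If that reading is right, SIGBUS usually points at a failed access to a memory-mapped file (e.g. an I/O error or truncation on the underlying filesystem) rather than memory exhaustion, which the OOM killer would end with SIGKILL (reported as -9).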
spades.log
params.txt
System information
SPAdes version: 3.15.3
Python version: 3.9.6
OS: Linux-4.14.320-242.534.amzn2.x86_64-x86_64-with-glibc2.28 (note: running on AWS)
Method of SPAdes installation
bioconda biocontainer (docker)
No errors reported in spades.log
- [x] Yes
Hey @jfy133! I've been encountering the same problem. In my case I allocated 512 GB, and the maximum usage according to the log is 411 GB, so I would also assume that should be enough. I was wondering if you found a solution or any other way around it.
No, unfortunately not :(.
It's definitely not a memory issue in this case... we have a suspicion it's something to do with how SPAdes is writing files, but we still don't know...
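One thing we may try (purely a guess, assuming the crash comes from memory-mapped I/O on the Fusion/S3-backed work directory): point the output at node-local scratch and copy the results back afterwards, e.g.:

```bash
# Hypothetical workaround: run SPAdes on local disk instead of the
# FUSE/S3-backed work directory, then copy results to their destination.
# Reads, resources and the destination path are placeholders.
SCRATCH=$(mktemp -d /tmp/spades_work.XXXXXX)
spades.py --meta \
    -1 reads_R1.fastq.gz -2 reads_R2.fastq.gz \
    -t 16 -m 128 \
    -o "$SCRATCH/assembly"
cp -r "$SCRATCH/assembly" /path/to/final/destination
```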
I've also had the same issue, but even more extreme: I allocated 950 GB of memory and the script still exited, claiming it could only access 450 GB.
Adding my experience: I've run into the same issue after allocating 64 GB, even though the log says it was only using 21 GB.