
spades-hammer finished abnormally, OS return value: -7

Open eltonjrv opened this issue 2 years ago • 7 comments

Description of bug

Dear Spades Team,

I'm getting this spades-hammer error after 38 hours of a metaspades assembly run:

    == Error ==  system call for: "['/nobackup/fbsev/bioinformatics-tools/SPAdes-3.15.4-Linux/bin/spades-hammer', '/nobackup/fbsev/LeedsOmics/DougStewart-Jan22/metaWRAP-run/ASSEMBLY_A/metaspades/corrected/configs/config.info']" finished abnormally, OS return value: -7

Also, as you will see in my spades.log, the "--restart-from last" argument is not working: it restarted from scratch and overwrote the entire 84 GB of content in the previous run's output directory.

Any clue about this would be appreciated. Thanks, Elton

spades.log

params.txt

SPAdes version

3.15.4

Operating System

CentOS

Python Version

3.7.4

Method of SPAdes installation

binaries

No errors reported in spades.log

  • [X] Yes

eltonjrv avatar May 20 '22 11:05 eltonjrv

Hello,

You might be running out of RAM; more information can likely be found in the system log. Running with --only-assembler (which skips the BayesHammer read-correction step entirely) may work.
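
For reference, a minimal sketch of how to read that value (plain Python, nothing SPAdes-specific): the SPAdes driver launches spades-hammer through Python's subprocess module, which reports a child killed by a signal as a negative return code, so -7 is not an exit status at all.

    import signal

    # "OS return value: -7" in the SPAdes log is a negative subprocess
    # return code, meaning the child was killed by signal 7 (SIGBUS on
    # Linux) rather than exiting with status 7.
    def describe_returncode(rc: int) -> str:
        if rc < 0:
            return f"killed by signal {-rc} ({signal.Signals(-rc).name})"
        return f"exited with code {rc}"

    print(describe_returncode(-7))  # killed by signal 7 (SIGBUS)
    print(describe_returncode(-9))  # killed by signal 9 (SIGKILL)

A -9 (SIGKILL) is the classic out-of-memory signature; a -7 (SIGBUS) can also come from memory-mapped I/O going wrong, which is worth keeping in mind further down this thread.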

Also, as you will see in my spades.log, the "--restart-from last" argument is not working: it restarted from scratch and overwrote the entire 84 GB of content in the previous run's output directory.

This is expected. There are no checkpoints inside BayesHammer, so a restart has to redo read error correction from the beginning.

asl avatar May 24 '22 09:05 asl

Thanks a lot! It worked with --only-assembler. Cheers, Elton

eltonjrv avatar May 30 '22 11:05 eltonjrv

Hi @asl, sorry to bring up an old thread, but we are hitting the same -7 exit code issue in spades-hammer, even though I believe we are giving it sufficient memory.

    0:12:16.435   269M / 11G   INFO   K-mer Counting           (kmer_data.cpp             : 354)   Arranging kmers in hash map order
    0:12:29.767  5910M / 11G   INFO    General                 (main.cpp                  : 148)   Clustering Hamming graph.
    0:16:48.765  5910M / 11G   INFO    General                 (main.cpp                  : 155)   Extracting clusters:
    0:16:48.765  5910M / 11G   INFO    General                 (concurrent_dsu.cpp        :  18)   Connecting to root
    0:16:49.069  5910M / 11G   INFO    General                 (concurrent_dsu.cpp        :  34)   Calculating counts
    0:17:40.084    15G / 15G   INFO    General                 (concurrent_dsu.cpp        :  63)   Writing down entries
  
  
  == Error ==  system call for: "['/usr/local/bin/spades-hammer', '/fusion/s3/nf-core-awsmegatests/work/mag/work-32cc2cc274e1aa97e6b60d58760a79d3f1cf90e8/c9/95a994d0940d7299adde88e485f556/spades/corrected/configs/config.info']" finished abnormally, OS return value: -7
  None
  
  In case you have troubles running SPAdes, you can write to [email protected]
  or report an issue on our GitHub repository github.com/ablab/spades
  Please provide us with params.txt and spades.log files from the output directory.
  
  SPAdes log can be found here: /fusion/s3/nf-core-awsmegatests/work/mag/work-32cc2cc274e1aa97e6b60d58760a79d3f1cf90e8/c9/95a994d0940d7299adde88e485f556/spades/spades.log
  
  Thank you for using SPAdes!

This is the end of the log. We have allocated 128 GB of memory to the SPAdes job, and if I understand correctly, the step where it crashes ("Writing down entries") was only using a peak of ~15 GB of memory.

Googling an exit code of 7, the only hit I've found is this one from libc; could that be relevant?

If you have any further advice on how to debug this error, that would be very helpful.
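
One hedged thing to check, given that -7 decodes to SIGBUS: on Linux, SIGBUS is what a process gets when it touches a memory-mapped page the filesystem can no longer supply, for example because the disk filled up or a network/FUSE mount (such as the /fusion/s3 path above) misbehaved. The hammer stage does heavy k-mer I/O in the output directory, so free space there is worth ruling out. A small illustration, with the path only as an example:

    import os

    # Example path taken from the log above; adjust to your own output dir.
    # A full filesystem under a memory-mapped file is one classic way to
    # get SIGBUS instead of a clean error message.
    outdir = "/fusion/s3/nf-core-awsmegatests/work/mag"

    st = os.statvfs(outdir)
    free_gib = st.f_bavail * st.f_frsize / 2**30
    print(f"free space under {outdir}: {free_gib:.1f} GiB")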

spades.log

params.txt

System information

SPAdes version: 3.15.3
Python version: 3.9.6
OS: Linux-4.14.320-242.534.amzn2.x86_64-x86_64-with-glibc2.28 (note: running on AWS)

Method of SPAdes installation

bioconda biocontainer (docker)

No errors reported in spades.log

  • [x] Yes

jfy133 avatar Sep 05 '23 15:09 jfy133

Hey @jfy133, I've been encountering the same problem. In my case I allocated 512 GB, and the peak memory use according to the log is 411 GB, so I would also assume that should be enough. I was wondering if you found a solution or any other workaround.

marianamnoriega avatar Sep 25 '23 17:09 marianamnoriega

No, unfortunately not :(.

It's definitely not a memory issue in this case... we have a suspicion it's something to do with how SPAdes is writing files, but we still don't know...
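
That suspicion is at least consistent with the signal. Here is a short, self-contained sketch (plain Python, purely illustrative, and it crashes on purpose) of how file writing gone wrong can turn into the same SIGBUS that gets reported as -7:

    import mmap
    import os
    import tempfile

    # Map a file, shrink it underneath the mapping, then touch the missing
    # page. On Linux the kernel answers with SIGBUS ("Bus error"), which a
    # Python parent process would report as return code -7.
    fd, path = tempfile.mkstemp()
    os.write(fd, b"\0" * 4096)
    mm = mmap.mmap(fd, 4096)
    os.ftruncate(fd, 0)  # the backing file no longer covers the mapping
    mm[0]                # touching the lost page raises SIGBUS

Anything that truncates, deletes, or fails to back a file while spades-hammer still has it mapped (network filesystems are common offenders) could produce exactly this failure mode.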

jfy133 avatar Sep 26 '23 11:09 jfy133

I've also had the same issue, but even more extreme: I allocated 950 GB of memory, yet the script exited claiming it could only access 450 GB.
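
If the script sees less memory than was requested, one possibility (an assumption to verify, not a confirmed diagnosis) is that the cgroup limit visible to the job is lower than the scheduler allocation. A small sketch for checking what the job can actually use:

    from pathlib import Path

    # Report the memory limit the current cgroup enforces; covers both
    # cgroup v2 and v1 layouts. Containers and cluster schedulers can set
    # this well below the machine's physical RAM.
    def effective_mem_limit():
        for p in ("/sys/fs/cgroup/memory.max",                      # cgroup v2
                  "/sys/fs/cgroup/memory/memory.limit_in_bytes"):   # cgroup v1
            f = Path(p)
            if f.exists():
                raw = f.read_text().strip()
                return None if raw == "max" else int(raw)
        return None

    limit = effective_mem_limit()
    print("limit:", "none found" if limit is None else f"{limit / 2**30:.0f} GiB")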

Nwilliams96 avatar Sep 26 '23 16:09 Nwilliams96

Adding my experience: I've run into the same issue after allocating 64 GB, even though the log says it was only using 21 GB.

pommevilla avatar Oct 17 '23 02:10 pommevilla