Increase usability on other systems by updating (SLURM)/scripts/juicer.sh
Juicer is supposed to run on several systems. You have come a long way, but there are some hardcoded bugs, more or less hidden in the scripts. This issue refers to (SLURM)/scripts/juicer.sh but may apply to the other schedulers as well.
- `<blank-space>` is not allowed before `#SBATCH` lines, like row 665, in the batch-script parts (`` jid=`sbatch << ... ``) and at more places (search for `<blank>#SBAT`). The error output is not spot on, so troubleshooting this took a while! Please change the indentation to `<tabs>`. Possibly this applies to `<blank>$userstring` as well (a sketch follows this list).
- Add `-A project` as a possible input instead of `user`.
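A minimal sketch of the fix, assuming the jobs are submitted through bash here-docs as in juicer.sh; the partition, time, and `project` values are placeholders, and the `-A` option is the suggested addition, not something juicer.sh accepts today. With `<<-`, bash strips leading tabs from the here-doc body, so indented `#SBATCH` lines reach SLURM at the start of the line, whereas leading spaces survive and SLURM silently ignores the directive.

```bash
#!/bin/bash
project="myproject"   # hypothetical: would come from a new -A option

# <<- strips leading TABs (not spaces) from the here-doc body, so the
# #SBATCH lines below reach sbatch flush against the left margin. With
# leading SPACES instead, SLURM ignores the directives without error.
jid=$(sbatch --parsable <<- EOF
	#!/bin/bash
	#SBATCH -p batch
	#SBATCH -t 00:10:00
	#SBATCH -A ${project}
	echo "hello from job \$SLURM_JOB_ID"
EOF
)
echo "submitted: $jid"
```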
Did you run the SLURM scripts successfully? I'm really confused by them.
Yes, the whole workflow works today. I'm staff at an HPC center and had conversations with a researcher for more than half a year until everything worked. At each step we found new things. Some updates to other tools were needed in the later steps as well.
I see that I have modified:
- juicer.sh
- juicer_hiccups.sh
- split_rmdups_sam.awk
- link: juicer_tools.jar -> juicer_tools.jar.v2.13.07
I cannot give support for the tool, but would you like to see our solutions for our cluster and get inspired for your local changes?
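For reference, the symlink in the list above can be recreated like this (file names copied from the comment; run it in the directory holding the jars):

```bash
# Point the generic jar name at the pinned version.
ln -sf juicer_tools.jar.v2.13.07 juicer_tools.jar
```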
The spaces thing was a PAIN to figure out! I am also rewriting this to work on our cluster and agree that all the `$userstring` lines were problematic because they had spaces instead of tabs before them.
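One quick way to locate the offending lines, assuming a grep that supports `-E`; the repository path is illustrative:

```bash
# List line numbers where #SBATCH or $userstring is indented with spaces.
grep -nE '^ +#SBATCH' SLURM/scripts/juicer.sh
grep -nE '^ +\$userstring' SLURM/scripts/juicer.sh
```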
Does anyone have any code here to help with this? I'm experiencing something similar on our cluster.
Hi @bclaremar, I've been stuck on our HPC for days.
```
(-: Looking for fastq files...fastq files exist
(-: Aligning files matching /staging/leuven/stg_00096/home/thangnx/LPS_WGS_shortread/HiC/opt/juicer/fastq/_R.fastq* in queue batch to genome draft with no fragment delimited maps.
--- Using already created files in /staging/leuven/stg_00096/home/thangnx/LPS_WGS_shortread/HiC/opt/juicer/splits
(-: Starting job to launch other jobs once splitting is complete
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
sbatch: error: Batch job submission failed: Job dependency problem
(-: Finished adding all jobs... Now is a good time to get that cup of coffee... Last job id
```
Could you also share the file to help me out with these bugs? Please send it to me at [email protected].
@thangnx1012 Were you able to resolve that "Job dependency problem" error? It has been an absolute nightmare getting this working on our Slurm-managed HPC. That, and this: `sbatch: error: Invalid numeric value "-d" for --cpus-per-task` (which I suspect is another whitespace issue resulting from my edits) are all that remain to be resolved for the main juicer.sh script (before we need to debug all the companion scripts).
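Both symptoms are consistent with an empty variable left behind by an earlier failed submission. A hypothetical sketch, not code from juicer.sh (the variable and script names are illustrative):

```bash
#!/bin/bash
# If an earlier here-doc submission failed (e.g. the whitespace bug),
# jid stays empty and every dependent sbatch then fails with
# "Batch job submission failed: Job dependency problem".
jid=$(sbatch --parsable <<- EOF
	#!/bin/bash
	#SBATCH -p batch
	true
EOF
)
# Guard so the failure surfaces immediately instead of cascading:
if [ -z "$jid" ]; then
    echo "error: upstream sbatch returned no job id" >&2
    exit 1
fi

# Likewise, if $threads is empty and unquoted, sbatch reads "-d" as the
# value of -c, producing: Invalid numeric value "-d" for --cpus-per-task.
# ${threads:?...} aborts with a clear message instead.
sbatch -c "${threads:?threads is unset}" -d "afterok:${jid}" next_step.sh
```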