
Support SLURM job scheduler

Open • bjpop opened this issue on Aug 12, 2013 • 2 comments

Add support for SLURM job scheduler.

bjpop • Aug 12 '13 00:08

I was looking for something similar. Looking through https://github.com/bjpop/rubra/blob/master/rubra/cluster_job.py, it doesn't seem like it would be difficult to add SLURM support. I'd be happy to contribute this, but I'm not sure whether it's actually needed.
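
For discussion's sake, here is a rough sketch of the kind of helper I had in mind. It is purely illustrative: the class name, constructor arguments, and the mapping to #SBATCH directives are all my own assumptions, not rubra's actual cluster_job.py interface.

import subprocess
import tempfile

class SlurmScript(object):
    """Illustrative only: build and submit a simple SLURM batch script.

    This is NOT rubra's cluster_job.py API; names and arguments are guesses.
    """

    def __init__(self, command, walltime="01:00:00", mem_in_gb=1,
                 cpus=1, queue=None, modules=None):
        self.command = command
        self.walltime = walltime
        self.mem_in_gb = mem_in_gb
        self.cpus = cpus
        self.queue = queue            # SLURM calls this a partition (-p)
        self.modules = modules or []

    def render(self):
        # Translate the usual walltime/memory/queue settings into #SBATCH directives.
        lines = ["#!/bin/bash",
                 "#SBATCH -n 1",
                 "#SBATCH --cpus-per-task=%d" % self.cpus,
                 "#SBATCH --time=%s" % self.walltime,
                 "#SBATCH --mem=%dG" % self.mem_in_gb]
        if self.queue is not None:
            lines.append("#SBATCH -p %s" % self.queue)
        lines.extend("module load %s" % m for m in self.modules)
        lines.append(self.command)
        return "\n".join(lines) + "\n"

    def launch(self):
        # Write the rendered script to a file and hand it to sbatch.
        with tempfile.NamedTemporaryFile("w", suffix=".slurm", delete=False) as f:
            f.write(self.render())
            script_path = f.name
        return subprocess.check_output(["sbatch", script_path]).decode()

Since sbatch prints "Submitted batch job <jobid>" on success, the job id could be scraped from launch()'s return value if the pipeline later needs to poll the job.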

Would using distributed: True with a flag for SLURM give a performance benefit over what I do now, i.e. submitting an sbatch script with distributed: False and a rubra configuration that matches the sbatch resource request?

Right now I'm essentially running a bash script via sbatch run.sh:

#!/bin/bash
#SBATCH -n 1
#SBATCH --cpus-per-task=40
#SBATCH --mem=50000 # memory pool for all cores, in MB

rubra RedDog2 --config RedDog_config --style run

With the following config:

pipeline = {
    "logDir": "log",
    "logFile": "All_pipeline.log",
    "style": "print",
    "procs": 40,
    "paired": True,
    "verbose": 1,
    "end": ["deleteDir"],
    "force": [],
    "rebuild": "fromstart"
}
stageDefaults = {
    "distributed":    False,
    "walltime":    "01:00:00",
    "memInGB":    50,
    "queue":    None,
    "modules": [
        # Note that these are for Barcoo at VLSCI
        # You will need to change these for distributed (queuing) installation
        "python-gcc/2.7.5",
        "bwa-intel/0.6.2",
        "samtools-intel/1.3.1",
        "bcftools-intel/1.2",
        "eautils-gcc/1.1.2",
        "bowtie2-gcc/2.2.9",
        "fasttree-gcc/2.1.7dp"
    ]
}
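
If SLURM support were added to cluster_job.py, my guess (and it is only a guess) is that the same stageDefaults could switch to per-stage submission simply by flipping distributed and naming a partition, along the lines of the sketch below; the scheduler key and the partition name are hypothetical, not real rubra options.

stageDefaults = {
    "distributed": True,          # submit each stage as its own cluster job
    "scheduler": "slurm",         # hypothetical key, not a real rubra option
    "walltime": "01:00:00",
    "memInGB": 50,
    "queue": "main",              # would map to a SLURM partition; name is site-specific
    "modules": [
        # same module list as in the non-distributed config above
        "python-gcc/2.7.5",
        "bwa-intel/0.6.2",
        "samtools-intel/1.3.1",
        "bcftools-intel/1.2",
        "eautils-gcc/1.1.2",
        "bowtie2-gcc/2.2.9",
        "fasttree-gcc/2.1.7dp"
    ]
}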

This non-distributed setup appears to be spawning parallel processes correctly.
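
Since the rubra config file is just Python, one quick and purely illustrative sanity check is to compare the allocation SLURM actually granted against the procs value, for example at the top of the config; SLURM exports SLURM_CPUS_PER_TASK inside the job whenever --cpus-per-task is set.

import os

# Illustrative check only, not part of rubra: make sure the CPUs SLURM granted
# match the procs value (40) used in the pipeline settings above.
allocated = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))
if allocated < 40:
    raise SystemExit("SLURM granted %d CPUs, but the config asks for procs = 40" % allocated)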

nschiraldi • Oct 17 '18 14:10

See https://github.com/katholt/RedDog/issues/58 for the answer.

d-j-e • Oct 17 '18 23:10