How to run a job with 'qsub' arguments or script flags
When I run a script on an HPC with a scheduler such as SGE, my script might contain flags like these:
#!/bin/bash
#$ -pe openmpi 32
#$ -A TensorFlow
#$ -N rqsub_tile
#$ -cwd
#$ -S /bin/bash
#$ -q gpu0.q
#$ -l excl=true
run_script.sh
Or, I might run a qsub command like this:
$ qsub -wd $PWD -o :${qsub_logdir}/ -e :${qsub_logdir}/ -j y -N "$job_name" -pe threaded 6-18 -l mem_free=10G -l mem_token=10G run_script.sh
I've spent a lot of time reading the docs, running through the examples, and Googling, but I can't find anything that actually shows how to use these parameters with this Python DRMAA library. Is it described somewhere? It sounds like something that might be part of the JobTemplate described here, but the docs don't mention this, or tell you much at all, really.
DRMAA has dedicated JobTemplate attributes for the working directory (workingDirectory), the stdout/stderr log paths (outputPath/errorPath), the job name (jobName), and the command to run (remoteCommand). The -j y flag is redundant under DRMAA (joinFiles covers it), so it can be dropped. The remaining flags have no corresponding DRMAA attribute, but can be passed through nativeSpecification, which is a catch-all string for anything that qsub (or the equivalent submitter on another cluster) would need.
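As a concrete sketch of that mapping, here is roughly how the qsub invocation above could be expressed with drmaa-python's JobTemplate. This assumes the drmaa package and a working libdrmaa on the cluster; build_native_spec and submit are hypothetical helper names, and logdir/job_name stand in for the shell variables in the original command:

```python
import os


def build_native_spec(options):
    """Join scheduler-specific qsub flags into one nativeSpecification string."""
    return " ".join(options)


# Flags with no dedicated DRMAA attribute go into nativeSpecification verbatim.
native_spec = build_native_spec([
    "-pe threaded 6-18",   # parallel environment / slot range
    "-l mem_free=10G",
    "-l mem_token=10G",
])


def submit(session, logdir, job_name):
    """Submit run_script.sh via an open drmaa.Session and return the job id."""
    jt = session.createJobTemplate()
    jt.remoteCommand = "run_script.sh"
    jt.jobName = job_name                 # qsub -N
    jt.workingDirectory = os.getcwd()     # qsub -wd $PWD
    jt.outputPath = ":" + logdir + "/"    # qsub -o (note the leading ':')
    jt.errorPath = ":" + logdir + "/"     # qsub -e
    jt.joinFiles = True                   # qsub -j y
    jt.nativeSpecification = native_spec  # everything with no DRMAA attribute
    job_id = session.runJob(jt)
    session.deleteJobTemplate(jt)
    return job_id
```

The submit helper would be called inside a `with drmaa.Session() as s:` block; the leading colon on outputPath/errorPath is part of the DRMAA path syntax, not a typo.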
As a simple example, you might find this code interesting. It creates a light CLI around DRMAA and allows submitting jobs without much of the boilerplate that qsub needs.
Thanks. Found a mention of it here:
I have a job script on SGE which has the following line embedded to specify the -pe option: "# -pe smp 8" How can I make DRMAA book the slots in SGE correctly?
If there are platform-specific (SGE) things that you want to pass on that are not in the general DRMAA interface, you can use the nativeSpecification field to pass any string that you like, e.g. job.nativeSpecification = "-pe smp 8"
Glad to hear that helped. Good to close?