
More flexible Job listing/killing for Slurm

Open ja-thomas opened this issue 6 years ago • 14 comments

We frequently change clusters/partitions on our HPC, and setting the clusters in makeClusterFunctionsSlurm() is not really practical (multiple users are linking to the same template/config file).

So we handle the clusters/partitions via resources. To make listing/killing of the jobs possible we need to set the squeue arguments in the functions accordingly.
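For context, a minimal sketch of that resources-based workaround (not runnable without a live Slurm system; the template name and resource fields are illustrative and assume a brew template that forwards resources$clusters and resources$partition into #SBATCH lines):

```r
# Illustrative sketch only: assumes the Slurm template contains lines like
#   #SBATCH --clusters=<%= resources$clusters %>
#   #SBATCH --partition=<%= resources$partition %>
library(batchtools)

reg = makeRegistry(file.dir = NA)  # temporary registry
reg$cluster.functions = makeClusterFunctionsSlurm(template = "slurm-simple")

batchMap(function(x) x^2, x = 1:3, reg = reg)
# cluster/partition chosen per submission, not baked into the constructor:
submitJobs(resources = list(clusters = "serial", partition = "batch"), reg = reg)
```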

This is not really nice, but I can't think of another solution. As far as I know, you can't change the clusters argument of makeClusterFunctionsSlurm() after or while creating the registry.

ja-thomas avatar Feb 05 '18 15:02 ja-thomas

Coverage Status

Coverage remained the same at 93.744% when pulling eda37eaee86ef85f234d1d510bff99be89790dfd on ja-thomas:master into 5c008ee9c8ef54bac0be2e373da74147d51ee657 on mllg:master.

coveralls avatar Feb 05 '18 15:02 coveralls

Can you call squeue without specifying --clusters? Can there be duplicated job.ids if you query multiple clusters?

mllg avatar Feb 05 '18 16:02 mllg

No, squeue without the clusters argument always returns no jobs (the partition argument is optional and only needed if a specific partition is actually used; i.e., there is a default partition but no default cluster).

There are no duplicated job ids as far as I know. I can double-check, but so far the job.id has always been a unique identifier across all clusters.

ja-thomas avatar Feb 05 '18 17:02 ja-thomas

How about this approach:

  • You specify a comma-separated list of clusters in the constructor
  • For submitJobs you select one of the clusters via a resource
  • The job listing functions iterate over all available clusters and return the union of all job ids. Job ids are later matched against the job ids in the database, so it is okay if the cluster functions return a superset here, but duplicated ids would lead to inconsistencies.
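The union step could be sketched in plain shell like this; `list_ids` is a hypothetical stand-in for `squeue -h -o %i -u $USER --clusters=<cluster>`:

```shell
# Hypothetical stand-in for querying one cluster, i.e. for:
#   squeue -h -o %i -u "$USER" --clusters="$1"
list_ids() {
  case "$1" in
    clusterA) printf '101\n102\n' ;;
    clusterB) printf '102\n103\n' ;;
  esac
}

# Union of job ids over all configured clusters. Duplicates are collapsed,
# which is only safe if job ids are unique across clusters.
union_ids() {
  for c in "$@"; do list_ids "$c"; done | sort -un
}

union_ids clusterA clusterB   # prints 101, 102, 103 (one per line)
```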

mllg avatar Feb 05 '18 17:02 mllg

Ah wait, this does not work for killJobs() 😞

mllg avatar Feb 05 '18 17:02 mllg

@mllg why don't you simply expose the "args" from listJobs in the constructor, with your settings as the default?

Then users can override this flexibly. Isn't that the normal trick? And it changes nothing for anybody else or for the internal code.

berndbischl avatar Mar 14 '18 17:03 berndbischl

this here:

 listJobsQueued = function(reg) {
   args = c("-h", "-o %i", "-u $USER", "-t PD", sprintf("--clusters=%s", clusters))

just expose this as args.listjobsqueued (or whatever), with the string as a default?
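A pseudocode sketch of that idea (argument names are illustrative, not the actual batchtools API):

```r
# Pseudocode sketch: expose the squeue arguments in the constructor,
# keeping the current behavior as the default. Names like
# 'args.listjobsqueued' are hypothetical.
makeClusterFunctionsSlurm = function(template = "slurm", clusters = NULL,
    args.listjobsqueued = c("-h", "-o %i", "-u $USER", "-t PD")) {

  listJobsQueued = function(reg) {
    args = c(args.listjobsqueued,
             if (!is.null(clusters)) sprintf("--clusters=%s", clusters))
    # ... run squeue with 'args' and parse the job ids ...
  }
  # ... remaining cluster functions unchanged ...
}
```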

berndbischl avatar Mar 14 '18 17:03 berndbischl

https://github.com/mllg/batchtools/pull/179

Can you please comment on whether

  1. this is now flexible enough for you guys,
  2. we still need the clusters argument, and
  3. this PR is now obsolete.

mllg avatar Mar 15 '18 10:03 mllg

I don't think this helps. The problem is that the args are evaluated at creation time of the cluster functions (at least when we build them conditionally with sprintf), not when they are actually called.

I hope we don't need the clusters argument anymore if we get that to work.

I think I'll take this rather ugly fix here and keep it as clusterFunctionsSlurmLRZ or whatever in the config repository for our project on the LRZ, since all cluster users are linking against my batchtools.conf file anyway...

The perfect solution for us would be for clusters + partitions to be resources that can be set on a job level (which is already possible, I think), with the listing/killing calls taking the values from there.

ja-thomas avatar Mar 16 '18 09:03 ja-thomas

#180 ?

mllg avatar Mar 16 '18 10:03 mllg

I could do the same thing for partitions, but I really don't know what I'm doing. :confused:

mllg avatar Mar 16 '18 10:03 mllg

This does not solve the original cluster/partition issue, but @berndbischl's suggestion of exposing the arguments would solve a problem I am encountering where all of my SLURM jobs show up as expired until done. My computing cluster has its own version of squeue (see here), but it only recognizes --noheader and not -h as assumed by the listJobs functions. Allowing users to tweak the listJobsQueued and listJobsRunning args would make it easier to use with nonstandard SLURM configurations.

lcomm avatar Mar 21 '18 15:03 lcomm

@lcomm The arguments will be exported in the next version of batchtools, I'm just waiting for some feedback on #180 before exposing the args.

@ja-thomas @berndbischl

mllg avatar Mar 22 '18 13:03 mllg

OK, it looks like --no-header is supported by rc-squeue now. I've changed the Slurm cluster functions to always use the longer command-line arguments nevertheless.

mllg avatar Mar 22 '18 13:03 mllg