
WIP: job-manager: add support for housekeeping scripts with partial release of resources

Open garlick opened this issue 1 year ago • 15 comments

This is a prototype of an idea @grondo and I were discussing last week after someone complained about a large job stuck in CLEANUP state while the epilog ran (reminding us, with a pang of guilt, that the lack of "partial release" support is going to be an ongoing source of sadness). This introduces a new type of epilog script called "housekeeping" that runs after the one we have now. There are two main features here:

  1. the job can reach INACTIVE state before housekeeping begins. This implies that the job no longer shows up in the active job list, and that flux-accounting stops accounting for the resources against the user.
  2. if the scheduler supports it, resources can be freed to the scheduler in parts as housekeeping completes on each execution target

sched-simple supports (2) above. Fluxion does not, yet. See flux-framework/flux-sched#1151.

The idea was that we could move some things that take a long time (like ansible updates) and are not job-specific from the epilog to housekeeping. In addition, working on this removes some barriers to implementing partial release of resources from jobs, which is more involved but is the ultimate goal.

One loose end is the RFC 27 hello request. Currently an R fragment cannot be passed back that way, so for now, if a job's allocation is partially freed when the scheduler is reloaded, the job is logged at LOG_ERR and not included in the list of pre-allocated jobs. We accept that, when the scheduler is reloaded, we might schedule a job on a node before housekeeping is finished (hopefully rare). More design work is required for RFC 27.

Another is tooling. flux resource list shows resources in housekeeping as allocated. It would be nice to know at a glance which nodes are tied up in housekeeping and maybe provide other information that would add transparency and improve how the admins interact with the machine.

~This is currently based on #5796 because there was a bit of overlap in the code, but conceptually they are separate features. At the time this PR was posted, the new work is the top 4 commits.~

I probably jumped the gun going this far without getting feedback on the approach. I'll just say it's only a couple days' work and I learned a fair bit about the problem space, so we can throw this out and start over if it ends up being off base.

garlick avatar Mar 21 '24 20:03 garlick

rebased on current master

garlick avatar Apr 03 '24 00:04 garlick

Possibly this is going too far, but to reuse as much tooling as possible, it would be nice if housekeeping "jobs" appeared as real jobs that you could view with flux jobs, that produce an eventlog for debugging, that could be recovered on restart, etc.

I wonder if we could revive some of the "fastrun" experiment, e.g. 9f2a33873d8c7ad200a1ece7a1194333b155fc30, to allow the job manager to start instance-owner jobs on its own. Then start one of these jobs for housekeeping each time a regular job completes. Bypass the exec system in favor of the direct, stripped-down approach proposed here (no shell), and implement partial release using resource-update so flux jobs would list housekeeping jobs:

$ flux jobs
       JOBID USER     NAME       ST NTASKS NNODES     TIME INFO
   ƒ26aXQeNs flux     housekeep+ R       2      2     3.5h elcap[3423,3499]

and it would be obvious which nodes are stuck, when that happens. The eventlog could provide a nice record of how nodes behaved.

garlick avatar Apr 03 '24 13:04 garlick

That's a cool idea! While there would be some interesting benefits, a few issues come to mind:

Would we end up in the same place because jobs don't currently support partial release? I.e., we'd have to solve partial release anyway, and if we did that we could just use traditional epilogs.

Also, I wonder what would actually be in the eventlog for these jobs, since there is no scheduling and no job shell? Would it just be a start, finish, and clean? Without a job shell we wouldn't capture the housekeeping job's output in the traditional way.

The housekeeping jobs would have to be special-cased to avoid running the job prolog and to avoid spawning housekeeping work themselves.

Not to say these issues can't be overcome, but I thought I'd bring them up.

grondo avatar Apr 03 '24 14:04 grondo

Hmm, great points to ponder.

I might be trivializing, but I thought partial release from the housekeeping exec to the scheduler could be implemented as it is here, except we'd also support a resource-update event to show tools what has been released.

I guess that implies some sort of "exec bypass" in the job manager.

Perhaps in the spirit of prototyping I could try to tack something on here as a proof of concept if the idea isn't too outlandish.

garlick avatar Apr 03 '24 14:04 garlick

As a simpler, though less interesting, alternative, we could add a new resource "state" alongside allocated and free. I'm not sure I like "housekeeping" as the name; maybe call it maint or service for now, since housekeeping is kind of like scheduled maintenance.

The job manager could just keep the set of resources in this state and send that set along in the job-manager.resource-status response with the allocated resources, and flux resource list could report nodes in this state.
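To make that concrete, a rough sketch of what such a response could carry is below. The housekeeping key and the trimmed-down R objects are purely illustrative assumptions here, not an existing payload:

  {
    "allocated": {
      "version": 1,
      "execution": {"R_lite": [{"rank": "0-62", "children": {"core": "0-63"}}]}
    },
    "housekeeping": {
      "version": 1,
      "execution": {"R_lite": [{"rank": "63", "children": {"core": "0-63"}}]}
    }
  }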

The one thing missing here, which is really nice about using the job abstraction (or even the traditional approach of keeping jobs in CLEANUP state), is tracking of how long resources have been in housekeeping/maintenance. We'd probably need some way to provide that. :thinking:

grondo avatar Apr 03 '24 14:04 grondo

Perhaps in the spirit of prototyping I could try to tack something on here as a proof of concept if the idea isn't too outlandish.

It does not seem too outlandish for a prototype. My main worry is having two classes of jobs, and the fallout this could cause among all the tools. However, I do see why tracking these things as jobs is compelling.

grondo avatar Apr 03 '24 15:04 grondo

I might be trivializing, but I thought partial release from the housekeeping exec to the scheduler could be implemented as it is here, except we'd support resource-update to show what's been released to the tools.

This might be good to do anyway since there's interest in allowing jobs to release resources while still running. We'll have to think through how tools and the job record reflect a changing resource set.
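Purely as an illustration of how a changing resource set might show up in the job record, a resource-update entry in the main job eventlog could look something like the RFC 18 style line below; the event name and context key are hypothetical, not something that exists today:

  {"timestamp":1712154000.5,"name":"resource-update","context":{"removed-ranks":"4-7"}}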

grondo avatar Apr 03 '24 15:04 grondo

Mentioning @ryanday36.

Ryan, this is a new epilog-like capability that runs after each job but does not prevent the job from completing. It also supports "partial release", whereby any stragglers are returned to the scheduler independently. The main topic of discussion is tooling: how do you tell that housekeeping is still running on a node, and for how long?

Some other aspects are:

  • stdio is not captured and nodes are not drained when the script fails. I figured the script itself could arrange for either of those things (for example, by calling flux logger or flux resource drain), but that could be changed. See the sketch after this list.
  • right now there is a corner case where a restart of Flux could lose track of running housekeeping work, and new jobs could start on a node before housekeeping is finished. Presumably we'll need to address that in some way.
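For example, a script could handle its own logging and draining along these lines (just a sketch; the run_ansible.sh path is a placeholder, not anything shipped by this PR):

  #!/bin/bash
  # Run the site's real housekeeping work (placeholder command).
  if ! /etc/flux/system/run_ansible.sh; then
      # Record the failure in the flux log and drain this node so the
      # scheduler won't hand it to new jobs until an admin looks at it.
      flux logger "housekeeping: run_ansible.sh failed on $(hostname)"
      flux resource drain "$(flux getattr rank)" "housekeeping failed"
      exit 1
  fi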

garlick avatar Apr 25 '24 14:04 garlick

This seems like an interesting approach. As far as tooling, if you do list the housekeeping jobs as "real" jobs, I like that they are listed as belonging to the flux user rather than the original job user. Slurm jobs in "completing" states that list the submitting user seem to be a perpetual source of confusion for our users. If you do list them as "real" jobs, though, it would be good if they had some state other than "R" to avoid confusion.

I think that using flux logger and flux resource drain in the housekeeping jobs would be acceptable, although it would still be good if the node got drained if a housekeeping job exited with anything other than success.

I agree that yes, it would be good to track nodes that have unfinished housekeeping.

My last thought is a sort of meta-concern. It seems like whenever we talk about the current prolog / epilog it's described as something that you're not quite happy with. Would this be further cementing that implementation, or would it be separating concerns in a way that would allow you to make changes to the current prolog / epilog more easily?

ryanday36 avatar Apr 25 '24 17:04 ryanday36

I think that using flux logger and flux resource drain in the housekeeping jobs would be acceptable

My worry is that unanticipated errors from a script or set of scripts run this way will be impossible to diagnose after the fact. You can easily code your housekeeping script to do

  try_something || flux logger "something failed"

Are we going to suggest authors do that on every line?

What if your python script throws an unexpected exception? Are we going to suggest cleanup scripts be wrapped in a large try/except block so that the exception can be logged with flux logger? What if the exception is that the script tried to connect to flux and somehow failed?
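Even in shell, the wholesale version of that convention is something like the sketch below (run_ansible is a placeholder command); it is doable, but it is one more thing every script author has to remember:

  #!/bin/bash
  # Sketch: log any command failure before the script dies, instead of
  # annotating every line with "|| flux logger ...".
  trap 'flux logger "housekeeping: command failed near line $LINENO"' ERR
  set -e
  run_ansible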

If we represent these things as jobs, we have a nice way to capture output from jobs, so maybe we could just do that? Is the worry that it is just too much work, or that it will double the KVS usage since there will be one housekeeping job per actual job?

grondo avatar Apr 25 '24 20:04 grondo

Good points. I guess I was thinking of the runner script they have now, which could log errors from the scripts it runs, but it's actually easy enough to capture output if we need it. The downside is that if we capture it in the flux log like we do now, it can be a source of log noise that pushes out the interesting stuff. I think it's very easy to add.

garlick avatar Apr 25 '24 20:04 garlick

If these housekeeping scripts are emulated like jobs, could we just log the errors in an output eventlog? Or are they only jobs as far as job-list is concerned, so all other job-related commands are just going to say "no such jobid"?

grondo avatar Apr 25 '24 20:04 grondo

It seems like whenever we talk about the current prolog / epilog it's described as something that you're not quite happy with. Would this be further cementing that implementation, or would it be separating concerns in a way that would allow you to make changes to the current prolog / epilog more easily?

The problem with the way the per-node epilog is currently managed is that it remotely executes the epilog through a specialized script, using a facility that was designed for things that need to run once, in a central location, after each job. This was meant to be a stopgap until we could distribute the job execution system, which would then manage the epilog scripts directly from each broker.

However, in my opinion, the solution proposed here is superior to even what we had planned. The benefits I see are:

  • The administrative epilog is separated from the user's job, both for runtime accounting and for reporting, as you note
  • The current-style epilog could still be used during the transition, or for anything that should keep resources associated with the user's job until completion (i.e. both epilog styles can coexist)
  • It doesn't require the job execution system to be distributed
  • Implementation of how the epilog or housekeeping scripts are run can be improved over time (not dependent on job execution system)
  • More flexible: you could presumably schedule maintenance or other work not related to jobs as 'housekeeping'

The only drawback I can see is perhaps more muddled reporting -- it isn't clear what user job spawned a housekeeping job (if that is the way it is going to be reported). But that could easily be queried, or, come to think of it, could even be an annotation. Maybe that is actually a benefit.

Also, reporting housekeeping work as jobs will (I assume?) report the nodes as "allocated" instead of some other state, and when queries are made for jobs there will always be some housekeeping jobs reported that would likely have to be filtered out.

grondo avatar Apr 25 '24 20:04 grondo

Cool. Thanks for the discussion on the benefits @grondo. That makes me feel better about it.

I do think that it would be good to be able to correlate housekeeping jobs with the user job that launched them. It's good to be able to find out if one user is consistently managing to leave nodes in bad states.

I'd actually be interested in being able to track the time spent in housekeeping separately from the time running the user jobs. It could be a good tool for keeping run_ansible and other epilog things from getting out of hand. That said, for most reporting, we'd probably lump them in as 'allocated'. I believe that's how time spent in 'completing' currently gets counted in Slurm.

Lastly, good points on the logging discussion. Things like run_ansible do generate a ton of output that would probably be good to keep out of the flux log. It would be good to be able to get at it if the housekeeping job fails, though, so a job-specific output eventlog that could be referenced on failure seems like it would be useful.

ryanday36 avatar Apr 25 '24 21:04 ryanday36

Rebased on current master with no other changes.

garlick avatar Jun 06 '24 14:06 garlick

Changes in that last push were

  • convert to bulk_exec
  • add systemd unit for the housekeeping script

Per offline discussion, next steps are

  • add query RPC to allow a command line tool to list nodes in housekeeping, by job id
  • add cancel RPC to allow a set of ranks to be sent a signal
  • drain nodes on failure

garlick avatar Jun 17 '24 21:06 garlick

I noticed the environment was not like the prolog's, so I fixed that.

garlick avatar Jun 17 '24 22:06 garlick

flux module stats job-manager now returns a housekeeping object that lists running housekeeping jobs and the most recent status per node.

I added a housekeeping-kill RPC that accepts a jobid, a rank idset, or nothing (meaning all), and sends a signal to any matching running housekeeping tasks. Sending SIGTERM when housekeeping is configured to use systemd causes systemctl stop to be run.

FWIW I added a default 30m timeout to the systemd unit file and also code to drain the node when the unit start fails.
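Roughly, the unit looks like the sketch below; the unit and script names here are illustrative rather than the exact files in the PR:

  # flux-housekeeping@.service (illustrative sketch)
  [Unit]
  Description=Flux housekeeping for job %i

  [Service]
  Type=oneshot
  # Fail the unit if the script hasn't finished within 30 minutes; the
  # job manager then drains the node like any other failure.
  TimeoutStartSec=30min
  ExecStart=/etc/flux/system/housekeeping.sh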

garlick avatar Jun 18 '24 22:06 garlick

FWIW I added a default 30m timeout to the systemd unit file and also code to drain the node when the unit start fails.

Great! Just FYI, the timeout the admins currently have on our site epilog is 24h. However, 30m and drain the node sounds reasonable to me. What happens when the timeout is reached? Does systemd kill the unit? (Though I wonder if admins would want to keep a hanging process around, or would not want a slow ansible run killed in the middle of its run.)

grondo avatar Jun 18 '24 23:06 grondo

What happens when the timeout is reached? Does systemd kill the unit?

I need to test it but I think systemd stops the unit after the timeout is reached, and that should drain the node like any other failure.

garlick avatar Jun 18 '24 23:06 garlick

Er apparently I was a little confused about how/whether jansson handles sparse arrays. Going to push a fix shortly.

garlick avatar Jun 19 '24 00:06 garlick

I'm ending the day without figuring out why this test always fails in CI and never fails on my desktop. The tests that are failing involve a housekeeping script that creates a file with a rank extension. The tests check for the existence of these files after housekeeping should have run. They never exist in CI. They always exist on my system.

The current tests print the paths as debug and I've confirmed there are no mismatches between the create and check directories. The synchronization appears to be solid - as explained in inline comments, we wait for flux module stats to show zero housekeeping jobs before checking for the files. I've been over the code looking for anything wrong with that assumption and haven't found anything.

Anyway, maybe it'll come to me in dreams tonight.

garlick avatar Jun 20 '24 00:06 garlick

Couple of quick observations from testing:

job-manager.housekeeping-kill doesn't work when housekeeping is run by the imp (the common case). I thought I had tested earlier that the imp hung around, forwarded signals, and would accept signals from the flux user, but the imp does not seem to hang around after all.

When the 30m systemd unit timeout occurs, the messages we get could be more descriptive. Drain message:

  TIME         STATE    REASON                         NODELIST
  Jun20 10:18  drained  housekeeping killed TERM       picl7

Log message:

job-manager[0]: housekeeping: picl7 (rank 7) ƒ54iW1HUTwd: nonzero exit code

garlick avatar Jun 20 '24 18:06 garlick

thought I had tested earlier that the imp hung around and forwarded signals, and that it would accept signals from the flux user but the imp does not seem to hang around after all.

Could this be because the IMP runs systemctl start flux-housekeeping.JOBID which then exits once the unit is in starting state? The IMP only sticks around as long as its child does (I think). Is there some way for systemctl to stay around until the unit is inactive?

When the 30m systemd unit timeout occurs, the messages we get could be more descriptive. Drain message:

Do we need the default 30m time limit? Should nodes just keep any stuck housekeeping script around until an admin can intervene (and actually determine which of possibly many housekeeping commands was stuck)?

grondo avatar Jun 20 '24 18:06 grondo

With "oneshot", systemctl start housekeeping does stick around until the script completes. What I observed was the script that is the target of the imp run command (flux-run-housekeeping) was the direct descendant of the broker

$ pstree -T 28259
flux-broker-7───flux-run-housek───systemctl

p.s. I just recreated that - this is before any signal is sent.

garlick avatar Jun 20 '24 18:06 garlick

Hm, let me remind myself how the IMP persistence works real quick.

grondo avatar Jun 20 '24 18:06 grondo

Oh, it is flux-imp exec that lingers, not flux-imp run. You could try flux-imp kill in this situation if the target PID is in a cgroup owned by the flux user.

grondo avatar Jun 20 '24 18:06 grondo

Do we need the default 30m time limit? Should nodes just keep any stuck housekeeping script around until an admin can intervene

Works for me. The admins can always add one via systemd override if desired.

garlick avatar Jun 20 '24 18:06 garlick

Oh, it is flux-imp exec that lingers, not flux-imp run

Ah thanks. I think I got confused over in #6040 - with the prolog/epilog, flux perilog-run invokes flux exec --with-imp, which ensures that if the remote process is started with imp run, it is killed with imp kill.

garlick avatar Jun 20 '24 19:06 garlick

I think there's a chance this approach may rule out, or make more difficult, some of the ideal rabbit-y behavior.

@grondo mentioned that with this PR,

Jobs could now get a clean event immediately after they finish (unless some other epilog-start events are issued). We can therefore move any commands that by default wait for the finish event to instead wait for the clean event, solving an older issue you'd brought up (https://github.com/flux-framework/flux-coral2/issues/137).

The flux-coral2 software issues an epilog-start event at the moment, to hold jobs while 1) compute nodes unmount their file systems and then 2) data is transferred from the rabbit nodes to the backing file system. We don't want users to think their jobs are complete until that's all done. So making flux job attach wait for clean is desirable as far as that goes, and since I think this PR makes that possible, that's great.

However, 1) and 2) could potentially take a long time. And as soon as any individual compute node unmounts its file systems, it could potentially be released back into the resource pool. But it sounds like this PR is building up the assumption that partial release would only happen after housekeeping, which in turn only happens after epilog-finish.

Unless I have misunderstood and there might still be a way to release resources during an epilog? Or perhaps it's just worth forgetting about the potential to release nodes after they unmount their file systems and before the epilog completes.

jameshcorbett avatar Jun 25 '24 01:06 jameshcorbett