
Proper Signal Handling for Distributed Training

Unturned3 opened this issue on May 26, 2025 · 0 comments

Would it be possible for the devs to write a demo showing how to properly handle OS signals in a distributed training setup that uses torchrun, e.g. single-node multi-GPU DDP? I could not find any documentation on this topic, and the only discussion thread I found on the forum seems inconclusive as well.

I'd like to be able to gracefully handle SIGINT, SIGTERM, or SIGUSR1 (sent by Slurm prior to job preemption) and perform actions such as saving checkpoints, shutting down loggers, etc. I tried to roll my own signal handling solution but could not get it to work robustly (or at all); there were too many strange intertwined errors for me to decipher what was going on.
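For concreteness, here is roughly the pattern I attempted (a simplified sketch, not code I'm confident in): the handler only sets a flag, the ranks agree on whether to stop via an all_reduce before the next step, and rank 0 writes the checkpoint. The dummy model, the fixed step count, and the `checkpoint.pt` path are just placeholders.

```python
import os
import signal

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Flag flipped by the signal handler; the training loop polls it.
_stop_requested = False

def _handle_signal(signum, frame):
    global _stop_requested
    _stop_requested = True

def main():
    # torchrun sets LOCAL_RANK/RANK/WORLD_SIZE in the environment.
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    dist.init_process_group(backend="nccl")

    # Register handlers in every worker process. I'm assuming here that
    # torchrun forwards SIGINT/SIGTERM/SIGUSR1 to its child workers,
    # which is part of what I'd like clarified.
    for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGUSR1):
        signal.signal(sig, _handle_signal)

    model = DDP(torch.nn.Linear(10, 10).cuda(local_rank),
                device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)

    for step in range(10_000):
        # All ranks must agree on whether to stop; otherwise the ranks
        # that keep training will hang in the next collective op.
        flag = torch.tensor([1 if _stop_requested else 0], device="cuda")
        dist.all_reduce(flag, op=dist.ReduceOp.MAX)
        if flag.item() == 1:
            if dist.get_rank() == 0:
                torch.save({"step": step,
                            "model": model.module.state_dict(),
                            "optimizer": opt.state_dict()},
                           "checkpoint.pt")
            break

        # One training step on dummy data, standing in for the real loop.
        out = model(torch.randn(32, 10, device="cuda"))
        loss = out.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    dist.barrier()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Even with something like this, I'm not sure whether torchrun actually delivers the signal to every worker (or only to the agent process, which may kill the workers before their handlers run), which might be where my attempts fell apart.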

Any help would be greatly appreciated!
