torchsnapshot
Leverage local disk for async snapshot
🚀 The feature
Leverage local disk for async snapshot.
Motivation, pitch
TorchSnapshot supports async snapshot, which allows training to resume before the storage I/O of a snapshot completes. For training workloads that are not storage I/O bound, this results in better resource utilization.
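For context, async snapshot is used roughly as in the minimal sketch below; the model, optimizer, and snapshot path are placeholders, not part of this proposal:

```python
import torch
import torchsnapshot

# Placeholder model/optimizer purely for illustration.
model = torch.nn.Linear(128, 64)
optim = torch.optim.SGD(model.parameters(), lr=0.1)
app_state = {"model": model, "optim": optim}

# async_take returns once the snapshot content has been staged,
# so training can resume while storage I/O continues in the background.
pending = torchsnapshot.Snapshot.async_take(
    path="/tmp/snapshots/step_1000",
    app_state=app_state,
)

# ... continue training ...

# Before taking the next snapshot (or exiting), wait for the pending
# snapshot to be fully persisted.
pending.wait()
```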
Today the feature is implemented roughly as follows:
- Calculate a RAM budget based on available host resources.
- Pipeline data from GPU -> RAM -> storage while keeping RAM usage under the budget.
- Once all data has been moved to either RAM or storage, return control to training and continue the storage I/O in the background.
This works well when host RAM is abundant. However, the smaller the RAM budget, the smaller the benefit async snapshot offers over sync snapshot. In such cases, if the target storage is slow (e.g. cloud storage), async snapshot can benefit from leveraging local disk as a staging area in addition to RAM.
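To illustrate the idea (this is a hypothetical sketch, not TorchSnapshot internals), the proposed staging policy could look roughly like the following, where `stage_buffer`, `ram_staged`, `ram_budget_bytes`, and `disk_staging_dir` are illustrative names: stage each serialized buffer in RAM while the budget allows, and spill the remainder to fast local disk so control can be returned to training before the slow cloud upload completes.

```python
import os
import tempfile

# Illustrative sketch only; none of these names are TorchSnapshot APIs.
def stage_buffer(
    buffer: bytes,
    ram_staged: list,
    ram_budget_bytes: int,
    disk_staging_dir: str,
):
    """Stage one serialized buffer, preferring RAM and spilling to local disk.

    Returns None if the buffer was staged in RAM, otherwise the path of the
    disk-staged file. Either way, the upload to the (slow) target storage can
    proceed in the background while training continues.
    """
    used = sum(len(b) for b in ram_staged)
    if used + len(buffer) <= ram_budget_bytes:
        # Under the RAM budget: keep the buffer in memory, as done today.
        ram_staged.append(buffer)
        return None
    # RAM budget exhausted: spill to local disk instead of blocking
    # training on the slow cloud-storage write.
    fd, path = tempfile.mkstemp(dir=disk_staging_dir, suffix=".stage")
    with os.fdopen(fd, "wb") as f:
        f.write(buffer)
    return path
```

The point of the disk tier is that local disk is typically much faster than cloud storage, so even with a small RAM budget, control can be handed back to training quickly.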
Alternatives
No response
Additional context
No response
/assigntome
@svekars
Thanks for the note on the other issue. I didn't look into it carefully; I just want to bring to your attention that the two links provided in the description point to the pull requests page, not the issues page:
"Bug fixes in the pytorch/tutorials repo tagged with the docathon-h2-2023 label - see the list. Docstring fixes in the pytorch/pytorch repo tagged with the docathon-h2-2023 label - see this list."
@derrickmo Thanks for pointing this out. We are fixing it now.