[RayTrain] Checkpoint API to recover from checkpoints of previous runs
Description
Steps to reproduce
There are examples that illustrate checkpointing and recovering from checkpoints in Ray Train. One such example shows how to configure checkpointing for a PyTorch training job.
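For context, here is a minimal sketch of the kind of setup such an example uses: the training loop reports checkpoints through ray.train.report, and the TorchTrainer's RunConfig writes them to persistent storage. The run name, storage path, and model below are illustrative placeholders, not values taken from the linked example.

import os
import tempfile

import torch

import ray.train
from ray.train import Checkpoint, CheckpointConfig, RunConfig, ScalingConfig
from ray.train.torch import TorchTrainer


def train_loop_per_worker(config):
    model = torch.nn.Linear(4, 1)  # placeholder model

    # Resume from a checkpoint if Ray Train hands one to this worker.
    checkpoint = ray.train.get_checkpoint()
    start_epoch = 0
    if checkpoint:
        with checkpoint.as_directory() as ckpt_dir:
            state = torch.load(os.path.join(ckpt_dir, "model.pt"))
            model.load_state_dict(state["model"])
            start_epoch = state["epoch"] + 1

    for epoch in range(start_epoch, config["num_epochs"]):
        # ... training step elided ...
        with tempfile.TemporaryDirectory() as tmp_dir:
            torch.save(
                {"model": model.state_dict(), "epoch": epoch},
                os.path.join(tmp_dir, "model.pt"),
            )
            ray.train.report(
                {"epoch": epoch},
                checkpoint=Checkpoint.from_directory(tmp_dir),
            )


trainer = TorchTrainer(
    train_loop_per_worker,
    train_loop_config={"num_epochs": 10},
    scaling_config=ScalingConfig(num_workers=2),
    run_config=RunConfig(
        name="pytorch-checkpoint-demo",           # placeholder run name
        storage_path="s3://my-bucket/ray-train",  # placeholder storage path
        checkpoint_config=CheckpointConfig(num_to_keep=2),
    ),
)
result = trainer.fit()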
1. Trigger the training RayJob
kubectl apply -f rayjob.yaml
2. Kill the head pod
Let the training job make a couple of checkpoints and then kill the head pod.
kubectl delete pods rayjob-sample-raycluster-lv85g-head-tbfwq
3. The new driver ignores the checkpoint
The current driver pod errors out and a new driver pod gets created. The new driver pod runs the training job again from scratch, ignoring the checkpoints produced in the previous run.
Hacky Fix
To work around this problem, we have to write a function with tightly coupled logic. For example, see the function findLatestCheckpoint in this job definition.
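The actual findLatestCheckpoint is not reproduced here. To illustrate how tightly coupled that kind of logic ends up being, a hypothetical equivalent might scan the run's storage directory and pick the newest checkpoint by Ray Train's checkpoint_NNNNNN directory naming; the path layout and naming below are assumptions, not details taken from the issue.

import os
import re


def find_latest_checkpoint(storage_path: str, run_name: str):
    # Hypothetical helper, assuming checkpoints land under
    # <storage_path>/<run_name>/<trial_dir>/checkpoint_NNNNNN on a
    # filesystem the driver can read. The hard-coded layout is exactly
    # the tight coupling the issue is complaining about.
    run_dir = os.path.join(storage_path, run_name)
    if not os.path.isdir(run_dir):
        return None
    latest_path, latest_index = None, -1
    for root, dirs, _ in os.walk(run_dir):
        for d in dirs:
            match = re.fullmatch(r"checkpoint_(\d+)", d)
            if match and int(match.group(1)) > latest_index:
                latest_index = int(match.group(1))
                latest_path = os.path.join(root, d)
    return latest_path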
Use case
It would be great to have an API that we can call to get the latest checkpoint location from the previous iteration of a given run.
This relates to an issue in Kuberay: https://github.com/ray-project/kuberay/issues/2155
cc: @kevin85421
It would be great to have an API that we can call to get the latest checkpoint location from the previous iteration of a given run.
Do you mean a Ray Train API? It makes sense to me. cc @matthewdeng
@matthewdeng @kevin85421, yes, a Ray Train API that helps find the last successful checkpoint.
@sathyanarays is this the API you'd be looking for?
@matthewdeng, yes, this works. Thank you for the pointers.
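The thread does not spell out which API was linked. Assuming it is the trainer-level restoration API (TorchTrainer.can_restore / TorchTrainer.restore), the driver-side recovery might look roughly like the sketch below; storage_path and run_name are placeholders that must match the original RunConfig, and train_loop_per_worker is the training function from the earlier sketch.

from ray.train import RunConfig, ScalingConfig
from ray.train.torch import TorchTrainer

storage_path = "s3://my-bucket/ray-train"  # placeholder, must match the original run
run_name = "pytorch-checkpoint-demo"       # placeholder run name
experiment_path = f"{storage_path}/{run_name}"

if TorchTrainer.can_restore(experiment_path):
    # Resume the interrupted run, including its latest reported checkpoint.
    trainer = TorchTrainer.restore(
        experiment_path,
        train_loop_per_worker=train_loop_per_worker,  # re-supply the training function
    )
else:
    # Nothing to restore at this path; start a fresh run.
    trainer = TorchTrainer(
        train_loop_per_worker,
        train_loop_config={"num_epochs": 10},
        scaling_config=ScalingConfig(num_workers=2),
        run_config=RunConfig(name=run_name, storage_path=storage_path),
    )

result = trainer.fit()
print("Latest checkpoint:", result.checkpoint)

In principle, a new driver pod created after a head-pod failure can then pick up the previous run's checkpoints instead of starting from scratch, without a hand-rolled findLatestCheckpoint.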