
Cannot Use CUDA When Running K-fold Cross Validation (bug in v0.0.6)

Open LYH-YF opened this issue 1 year ago • 0 comments

When using a GPU to train any model with k-fold cross validation, the first fold runs normally, but from the second fold onward training becomes slow because the GPU is no longer used. The cause is the checkpoint-saving code: all parameters in the config object are saved to a JSON file when a checkpoint is written, but config['device'] = torch.device('cuda') cannot be serialized to JSON, so that parameter is deleted directly from the config object. When the next fold runs, config['device'] no longer exists, and the model is not placed on the GPU.
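A minimal sketch of the failure mode and one possible fix. This is not MWPToolkit's actual code; `FakeDevice` is a stand-in for `torch.device` (which `json` likewise cannot serialize), so the snippet runs without PyTorch installed. The idea of the fix is to serialize a *copy* of the config with the device stringified, instead of deleting the key from the live config:

```python
import copy
import json


class FakeDevice:
    """Stand-in for torch.device: json.dumps() cannot serialize it."""

    def __init__(self, name):
        self.name = name

    def __str__(self):
        return self.name


config = {"model": "GTS", "device": FakeDevice("cuda")}

# Buggy approach (what v0.0.6 reportedly does): delete the key from
# the live config so json.dumps() succeeds. Later folds then find no
# 'device' entry and silently fall back to CPU.
#
#   del config["device"]

# Safer approach: serialize a deep copy with the device converted to
# a string, leaving the live config object untouched for later folds.
serializable = copy.deepcopy(config)
serializable["device"] = str(serializable["device"])
checkpoint_json = json.dumps(serializable)

# The live config still carries the device for the next fold.
assert "device" in config
```

The same pattern applies with the real `torch.device('cuda')`: dump `str(config['device'])` into the checkpoint JSON, and rebuild the device with `torch.device(...)` when loading.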

LYH-YF · Aug 07 '22 03:08