Aaron Rizzuto

Results: 10 comments by Aaron Rizzuto

After some more digging, on lines 62-63 of `lightning/pytorch/core/saving.py`:
```
with pl_legacy_patch():
    checkpoint = pl_load(checkpoint_path, map_location=map_location)
```
with `map_location = torch.device('cpu')`, the returned checkpoint has:
```
>>> checkpoint['hyper_parameters']['loss'].device
device(type='cuda', index=0)...
```
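Since `map_location` apparently doesn't reach tensors stored inside `hyper_parameters`, one hedged workaround is to walk the loaded checkpoint dict yourself and move anything tensor-like to CPU. A minimal sketch — the helper name and the duck-typed `.cpu()` check are my own, not Lightning API:

```python
def tensors_to_cpu(obj):
    """Recursively move any tensor-like object (anything exposing a
    .cpu() method, as torch tensors do) in a nested structure to CPU."""
    if hasattr(obj, "cpu"):
        return obj.cpu()
    if isinstance(obj, dict):
        return {k: tensors_to_cpu(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(tensors_to_cpu(v) for v in obj)
    return obj

# Usage after loading (assumes torch / pl_load as in the snippet above):
# checkpoint = pl_load(checkpoint_path, map_location=torch.device("cpu"))
# checkpoint["hyper_parameters"] = tensors_to_cpu(checkpoint["hyper_parameters"])
```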

My current workaround is: after training on the GPUs, I load the trained model on rank 0, forcing everything to CPU, then pickle the model. Loading the pickled model on...

Hi @saurabh-sh704, here's the block that does everything; creation of the model and trainer are abstracted away here. Also, I'm not using bestTFT; I have a fixed process for training...

Just to be clear @mrgreen3325, once you've created the pickled TFT (`tft.pkl` file) you load it back in like:
```
with open('tft.pkl', 'rb') as f:
    model = pickle.load(f)
```

> > Hi [@mrgreen3325](https://github.com/mrgreen3325) `tft.pkl` is a pickle object, not a checkpoint. Checkpoints are `.ckpt` files automatically created during training. By default they're in a `lightning_logs` directory and have names...
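For anyone hunting for those files: under Lightning's default layout, checkpoints land in `lightning_logs/version_*/checkpoints/`. A small sketch for picking up the most recent one (helper name is mine; it just globs the assumed default directory structure):

```python
from pathlib import Path

def find_latest_checkpoint(log_dir="lightning_logs"):
    """Return the most recently written .ckpt file under `log_dir`,
    or None if no checkpoint exists yet."""
    ckpts = sorted(Path(log_dir).rglob("*.ckpt"),
                   key=lambda p: p.stat().st_mtime)
    return ckpts[-1] if ckpts else None

# Usage (Lightning API, assumed model class):
# ckpt = find_latest_checkpoint()
# model = MyModel.load_from_checkpoint(ckpt)
```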

> Thanks for the reply. May I know how to set `tsdataset = TimeSeriesDataSet()`? Since the data are encoded during training, does it need to be encoded in the same...
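The underlying concern — that inference data must be encoded with the *fitted* encoders from training, never refit — can be illustrated with a tiny stand-in encoder (this class is purely illustrative, not pytorch-forecasting code):

```python
class CategoryEncoder:
    """Minimal stand-in for a fitted categorical encoder: the mapping
    learned during training must be reused, not refit, at inference,
    or the same category could map to a different integer."""
    def __init__(self):
        self.mapping = {}

    def fit(self, values):
        self.mapping = {v: i for i, v in enumerate(sorted(set(values)))}
        return self

    def transform(self, values):
        return [self.mapping[v] for v in values]

# In pytorch-forecasting, this reuse is what
# TimeSeriesDataSet.from_dataset(training_dataset, new_data,
#                                predict=True, stop_randomization=True)
# handles for you: it carries the training dataset's encoders over.
```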

Hi @fkiraly, totally understand; I am digging into the cogs more than intended. Having said that, keeping the setting in the setter function is also a fairly common convention in Python...

> Interesting - would you be able to share in a draft PR so we can analyze the code and suggest ways to include it in the rework? I'll try...

I'm not running multiprocessing or anything like that. Looking back, I see my description is a bit unclear; I'll try to make it more direct by writing it step by step:...