
Model Loader Callback


Is your feature request related to a problem? Please describe.

  • Currently, the PyTorch Lightning (PL) trainer uses the LoadModelCallback callback from anomalib/utils/callbacks/model_loader.py, which relies on PyTorch's torch.load function to load the best weights. This can cause device-related issues, e.g. when the model is trained on GPU but tested/predicted on a CPU-only machine (see the sketch below).
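
A minimal sketch of the failure mode described above (not the actual anomalib callback; the checkpoint path is hypothetical): a plain torch.load of a checkpoint written from CUDA tensors raises a RuntimeError on a CPU-only machine unless map_location is passed explicitly.

```python
import torch

# Hypothetical path to a checkpoint produced by a GPU training run.
ckpt_path = "results/weights/model.ckpt"

# On a CPU-only machine this fails, because the stored tensors reference CUDA devices:
#   state = torch.load(ckpt_path)
#   RuntimeError: Attempting to deserialize object on a CUDA device
#   but torch.cuda.is_available() is False. ...

# Device-safe variant: remap all storages onto the CPU at load time.
state = torch.load(ckpt_path, map_location=torch.device("cpu"))

# Lightning checkpoints keep the model weights under the "state_dict" key.
print(list(state["state_dict"].keys())[:5])
```

Letting the trainer restore the checkpoint, as suggested below, avoids having to hard-code a map_location in the callback at all.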

Describe the solution you'd like

  • I suggest removing PyTorch's torch.load call from LoadModelCallback and letting the PL trainer handle loading the checkpoint instead:
```python
trainer.test(model=model, datamodule=datamodule, ckpt_path='best')  # or
trainer.test(model=model, datamodule=datamodule, ckpt_path='$path_to_the_checkpoint_user_wish_to_test')
trainer.predict(model=model, datamodule=datamodule, ckpt_path='best')  # or
trainer.predict(model=model, datamodule=datamodule, ckpt_path='$path_to_the_checkpoint_user_wish_to_predict')
```
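
(Note: if I'm reading the PL docs right, ckpt_path='best' requires a ModelCheckpoint callback to be configured on the trainer so that the best checkpoint path can be resolved, and the trainer then takes care of mapping the restored weights onto whatever device it is running on.)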

Reference: [PL docs]

shakib-root, Jun 07 '22 07:06

Thanks for your suggestion! This could indeed be a better way of loading the model weights, though we will have to investigate if this would lead to any unwanted/unexpected behavior. We'll have a look and post any findings here.

djdameln, Jun 07 '22 08:06

Hi @shakib-root, the reason we used this approach is that earlier PL versions had a bug in model loading: we were unable to reproduce the training-time performance when testing from a restored checkpoint. We could try again now.

samet-akcay, Jun 07 '22 08:06

I am closing this, as the change has been merged into the feature branch and will be merged into main soon.

ashwinvaidya17, Dec 29 '22 09:12