PyTorch-VAE
f'The provided lr scheduler "{scheduler}" is invalid'
My torchvision version is 2.01; I don't know how to solve this error.
me too
Hi @luoclab, @dai-jiuhun, could you be more specific? Which model did you try, did you change any piece of code, which parameters did you choose, etc.? In particular, of course, the name of the scheduler you are trying to use :)
Hey @MisterBourbaki
No changes to the code/YAML files
I tried running VanillaVAE with torch==2.2 and CUDA 11.8, and got the following error while executing run.py.
[rank0]: File "/home/arpan/ddpm/PyTorch-VAE/run.py", line 63, in <module>
[rank0]: runner.fit(experiment, datamodule=data)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 737, in fit
[rank0]: self._call_and_handle_interrupt(
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
[rank0]: return trainer_fn(*args, **kwargs)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 772, in _fit_impl
[rank0]: self._run(model, ckpt_path=ckpt_path)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1140, in _run
[rank0]: self.accelerator.setup(self)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/accelerators/gpu.py", line 46, in setup
[rank0]: return super().setup(trainer)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/accelerators/accelerator.py", line 93, in setup
[rank0]: self.setup_optimizers(trainer)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/accelerators/accelerator.py", line 351, in setup_optimizers
[rank0]: optimizers, lr_schedulers, optimizer_frequencies = self.training_type_plugin.init_optimizers(
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 245, in init_optimizers
[rank0]: return trainer.init_optimizers(model)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/trainer/optimizers.py", line 44, in init_optimizers
[rank0]: lr_schedulers = self._configure_schedulers(lr_schedulers, monitor, not pl_module.automatic_optimization)
[rank0]: File "/home/arpan/miniconda3/envs/torch-2.2/lib/python3.10/site-packages/pytorch_lightning/trainer/optimizers.py", line 192, in _configure_schedulers
[rank0]: raise ValueError(f'The provided lr scheduler "{scheduler}" is invalid')
[rank0]: ValueError: The provided lr scheduler "<torch.optim.lr_scheduler.ExponentialLR object at 0x7f63e8121ed0>" is invalid
Any solutions?
Hi @arpu-nagar, I just had a look at the issue, and I think it comes from the age of the code. This repo is great, but quite old: both torch's and, in particular, Lightning's APIs have changed a lot since it was written.
So, rather than spending too much time finding the right way to change a few lines of code here and there, I think it is best to build a training pipeline from scratch using Lightning. They have really good tutorials :)
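In a pipeline built against current Lightning, `configure_optimizers` can return the optimizer and scheduler together as a dict, which avoids the fragile bare-object validation. A minimal sketch under those conventions (the toy model is hypothetical, and in a real `LightningModule` this would be a method on the module):

```python
import torch

# Hypothetical toy model, just so there are parameters to optimize.
model = torch.nn.Linear(4, 2)

def configure_optimizers():
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
    # Modern Lightning accepts this dict form: the scheduler is wrapped in a
    # nested "lr_scheduler" entry rather than passed as a bare object.
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "interval": "epoch"},
    }
```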
And if I may, I am trying to rebuild this repo in a more modern way here. It is still a work in progress, but I hope to catch up quickly.
@luoclab @dai-jiuhun @arpu-nagar
This problem comes from the version of pytorch-lightning, whose API has changed. I've updated the code to support the latest PyTorch, 2.2.x; you could give it a try: https://github.com/ray-ruisun/PyTorch-VAE
The problem is that the code targets older versions, but requirements.txt installs the newest ones. I used these installs to get past the error:
!pip install torch==1.13.1
!pip install torchvision==0.14.1
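The pinning approach above can be collected in a single requirements file. The torch and torchvision pins come from the comment above; the pytorch-lightning pin is an assumption matching that era of the codebase and should be checked against the repo's own requirements.txt:

```
torch==1.13.1
torchvision==0.14.1
pytorch-lightning==1.5.6
```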