
Error when resuming with optimizer and scheduler set

Open w121211 opened this issue 7 years ago • 1 comment

Using ./examples/sample.py with the optimizer section uncommented:

optimizer = Optimizer(torch.optim.Adam(seq2seq.parameters()), max_grad_norm=5)
scheduler = StepLR(optimizer.optimizer, 1)
optimizer.set_scheduler(scheduler)

First, run for a while to collect some checkpoints, then run again with '--resume'. The following error is raised:

python examples/sample.py --train_path $TRAIN_PATH --dev_path $DEV_PATH --resume
2017-11-05 14:54:53,118 root         INFO     Namespace(dev_path='data/toy_reverse/dev/data.txt', expt_dir='./experiment', load_checkpoint=None, log_level='info', resume=True, train_path='data/toy_reverse/train/data.txt')
Loading checkpoints from ~/pytorch-seq2seq-master/./experiment/checkpoints/2017_11_05_14_54_09
Traceback (most recent call last):
  File "examples/sample.py", line 129, in <module>
    resume=opt.resume)
  File "~/miniconda3/envs/ape/lib/python3.6/site-packages/seq2seq-0.1.4-py3.6.egg/seq2seq/trainer/supervised_trainer.py", line 169, in train
TypeError: __init__() got an unexpected keyword argument 'initial_lr'
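
A likely cause (my reading of the traceback, not confirmed here): when a scheduler is attached, PyTorch adds an initial_lr entry to each optimizer param group, and the resume path appears to rebuild the optimizer from those param-group values, so initial_lr gets passed to Adam.__init__, which rejects it. Below is a minimal sketch reproducing that pattern and a possible workaround (popping the key before reconstruction); the variable names are only illustrative:

import torch
from torch import nn
from torch.optim.lr_scheduler import StepLR

model = nn.Linear(4, 4)
optim = torch.optim.Adam(model.parameters())
StepLR(optim, 1)  # attaching a scheduler adds 'initial_lr' to each param group

# Rebuild the optimizer from the saved param-group settings, as a resume path might.
defaults = dict(optim.param_groups[0])
defaults.pop('params')
# Without this pop, Adam.__init__ raises:
#   TypeError: __init__() got an unexpected keyword argument 'initial_lr'
defaults.pop('initial_lr', None)
rebuilt = optim.__class__(model.parameters(), **defaults)

If that is indeed what supervised_trainer.py does around line 169, dropping the initial_lr entry (or restoring the optimizer via load_state_dict instead of re-instantiating it) should avoid the TypeError.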

w121211 commented Nov 05 '17 07:11

Thanks for bringing it up! I'll look into it.

kylegao91 commented Nov 06 '17 23:11