fastNLP.core.utils._CheckError
Reproducing FLAT with fastNLP 0.5.0, Python 3.8, torch 1.7, on Ubuntu.
The following error occurs:
Epoch 1/100: 1%|▌ | 955/95600 [01:01<1:24:04, 18.76it/s, loss:56.88514]/home/ai998/.conda/envs/nlp/lib/python3.7/site-packages/torch/optim/lr_scheduler.py:156: UserWarning: The epoch parameter in scheduler.step() was not necessary and is being deprecated where possible. Please use scheduler.step() to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
Traceback (most recent call last):
  File "flat_main.py", line 801, in LossInForward.get_loss(self, **kwargs)
	missing param: ['loss(assign to loss in LossInForward']
I can't figure out how the loss went missing. How can I fix this?
From the error, it looks like the dict returned by the model does not contain a 'loss' key.
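For context, fastNLP 0.5's LossInForward pulls the loss straight out of the dict that the model's forward returns, so a training-compatible forward has to look roughly like the minimal sketch below. ToyModel and the field names words/target are placeholders for illustration, not the actual FLAT code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyModel(nn.Module):
    # Placeholder model, not the FLAT architecture.
    def __init__(self, vocab_size=100, num_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 32)
        self.fc = nn.Linear(32, num_labels)

    def forward(self, words, target):
        hidden = self.embed(words).mean(dim=1)   # (batch, hidden)
        logits = self.fc(hidden)                 # (batch, num_labels)
        # LossInForward reads the 'loss' key from this dict; if the key
        # is absent, fastNLP raises _CheckError with "missing param: ['loss...']".
        return {'loss': F.cross_entropy(logits, target)}
```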
I hit the same error. Adding self.model.train() before each epoch made it run.
In that case, my guess is that the model's forward uses the self.training attribute to decide whether it is currently doing inference, and takes different paths depending on whether self.training is True or False. Manually calling self.model.train() presumably sets self.training back to True; a sketch of that pattern follows.
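To illustrate that guess, here is a hypothetical forward (again not the real FLAT code) that branches on self.training: in eval mode it returns only predictions, so LossInForward finds no 'loss' key, and calling model.train() restores the training path:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchingModel(nn.Module):
    # Hypothetical model whose forward branches on self.training,
    # mirroring the suspected behavior described above.
    def __init__(self, vocab_size=100, num_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 32)
        self.fc = nn.Linear(32, num_labels)

    def forward(self, words, target=None):
        logits = self.fc(self.embed(words).mean(dim=1))
        if self.training:  # True after .train(), False after .eval()
            return {'loss': F.cross_entropy(logits, target)}
        return {'pred': logits.argmax(dim=-1)}  # no 'loss' key in eval mode

model = BranchingModel()
model.eval()  # if something left the model in eval mode before training...
out = model(torch.randint(0, 100, (2, 5)))
assert 'loss' not in out  # ...LossInForward would report 'loss' as missing
model.train()  # the workaround above: sets self.training back to True
```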