I trained the model on the LEVIR dataset, and at test time I get this error:
Traceback (most recent call last):
File "eval_cd.py", line 58, in
main()
File "eval_cd.py", line 54, in main
model.eval_models(checkpoint_name=args.checkpoint_name)
File "/tmp/pycharm_project_668/models/evaluator.py", line 158, in eval_models
self._load_checkpoint(checkpoint_name)
File "/tmp/pycharm_project_668/models/evaluator.py", line 70, in _load_checkpoint
self.net_G.load_state_dict(checkpoint['model_G_state_dict'])
File "/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1497, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for BASE_Transformer:
size mismatch for transformer_decoder.layers.0.0.fn.fn.to_q.weight: copying a param with shape torch.Size([512, 32]) from checkpoint, the shape in current model is torch.Size([64, 32]).
size mismatch for transformer_decoder.layers.0.0.fn.fn.to_k.weight: copying a param with shape torch.Size([512, 32]) from checkpoint, the shape in current model is torch.Size([64, 32]).
size mismatch for transformer_decoder.layers.0.0.fn.fn.to_v.weight: copying a param with shape torch.Size([512, 32]) from checkpoint, the shape in current model is torch.Size([64, 32]).
size mismatch for transformer_decoder.layers.0.0.fn.fn.to_out.0.weight: copying a param with shape torch.Size([32, 512]) from checkpoint, the shape in current model is torch.Size([32, 64]).
(the same to_q/to_k/to_v/to_out.0 size mismatches repeat for transformer_decoder.layers.1 through transformer_decoder.layers.7)
The dataset images are 256×256, and both training and testing follow the steps in the README. Does anyone know how to fix this?
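To pinpoint exactly which parameters disagree before `load_state_dict` raises, you can diff the shapes of the two state_dicts. A minimal sketch (the helper name and the toy dicts below are illustrative; in the real code you would build the dicts from `{k: tuple(v.shape) for k, v in checkpoint['model_G_state_dict'].items()}` and the same over `self.net_G.state_dict()`):

```python
def find_shape_mismatches(ckpt_shapes, model_shapes):
    """Return {name: (checkpoint_shape, model_shape)} for every parameter
    present in both dicts whose shapes differ."""
    return {
        name: (ckpt_shapes[name], model_shapes[name])
        for name in ckpt_shapes
        if name in model_shapes and ckpt_shapes[name] != model_shapes[name]
    }

# Toy dicts mirroring the first mismatch reported in the traceback above.
ckpt = {"transformer_decoder.layers.0.0.fn.fn.to_q.weight": (512, 32)}
model = {"transformer_decoder.layers.0.0.fn.fn.to_q.weight": (64, 32)}
print(find_shape_mismatches(ckpt, model))
# → {'transformer_decoder.layers.0.0.fn.fn.to_q.weight': ((512, 32), (64, 32))}
```

A shape diff like this tells you immediately whether the problem is a wrong architecture hyperparameter (shapes differ) or a renamed module (keys missing), which are fixed in very different ways.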
After a day of nonstop debugging and testing I finally found the cause... Since the author may have left this issue in on purpose, I won't post the fix here. If you need it, contact me privately: 9506#163.com
I've reached out to you!
Added you. I didn't test the visualization part, so I may not be able to help you much there.
Already replied, please check your email.
Hi, my QQ is 1121399040. I'd like to ask you about the visualization issue.
The author changed the test-time model definition. Just make the test model configuration match the training model and it works. I fiddled with it for ages thinking the model itself was broken; only after printing the saved checkpoint's parameters next to the current model's did I spot the difference.
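The mismatch is consistent with a single changed attention hyperparameter: in this kind of attention module, `to_q.weight` has shape `[heads * dim_head, dim]`, so the ratio of the row counts tells you how far apart the two configurations are. A small illustrative sketch (the function name is hypothetical):

```python
def attn_config_ratio(ckpt_rows, model_rows):
    """to_q.weight has shape [heads * dim_head, dim]; the ratio of row
    counts between checkpoint and current model shows by what factor the
    heads * dim_head product differs between the two configurations."""
    return ckpt_rows / model_rows

# Rows from the traceback: 512 in the checkpoint vs 64 in the current model.
print(attn_config_ratio(512, 64))  # → 8.0
```

Here the checkpoint's `heads * dim_head` product is 8× the eval model's, so one of those two hyperparameters (or both) was changed between the training and test model definitions; aligning them, as described above, resolves the error.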
Thanks a lot! The problem is solved now.
Yes, that's exactly the problem. I also spent ages on it back then...
Thank you very much! The problem is solved.