PSMNet
Hello, I would like to ask you a few questions about Test_img.py.
1. I have recently been trying to reproduce your PSMNet model. When I use your updated Test_img.py, the following error appears:
File "/home/mist/PSMNet-master/models/stackhourglass.py", line 116, in forward
refimg_fea.size()[3]*1).zero_()).cuda()
TypeError: new(): argument 'size' must be tuple of ints, but found element of type float at pos 3
2. When I use your earlier pretrained_model_KITTI2015.tar, I always run into the problem below, which prevents me from using it. I am not sure whether I am using the tar file correctly.
model.load_state_dict(state_dict['state_dict'])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 845, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.dres2.0.0.weight", "module.dres2.0.1.weight", "module.dres2.0.1.bias", "module.dres2.0.1.running_mean", "module.dres2.0.1.running_var", "module.dres2.2.0.weight", "module.dres2.2.1.weight",
If you happen to see my questions and have some time, I would be grateful for your help. Thank you!
@williamma111 Hello. 1. I have pushed a fix for this issue; you can pull the repo again (git). 2. This error means the model configuration does not match the model weights. Did you modify the model? You can try pulling the repo again.
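For reference, a minimal loading sketch of the pattern used in Test_img.py. The stackhourglass constructor name, the max-disparity value of 192, and the checkpoint path are assumptions taken from this thread and the repo's scripts, so adjust them to your local copy:

import torch
import torch.nn as nn
from models import stackhourglass  # pretrained_model_KITTI2015.tar was trained with this architecture

model = stackhourglass(192)        # the architecture must match the checkpoint
model = nn.DataParallel(model)     # the checkpoint keys carry the "module." prefix
model.cuda()

state_dict = torch.load('pretrained_model_KITTI2015.tar')
model.load_state_dict(state_dict['state_dict'])  # raises "Missing key(s)" if the model was built as basic instead

Building the basic model and then loading these weights reproduces a Missing key(s) error like the one above, because the two architectures name and structure their layers differently.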
1. Thank you very much for the quick reply; it has sped up my progress. I have solved problem 1.
2. The cause of my error in problem 2 was that I tried to train the basic model with the pretrained weights (which obviously cannot work, since the pretrained model targets the stackhourglass architecture). I also found that to use the basic model on Python 3.6+, basic.py still needs the same changes you made to stackhourglass.py.
The results of the basic model trained without the pretrained weights are as follows:

Could you tell me how problem 1 was solved? I am running into the same issue and would appreciate any help.
Hey @mirrorplus123 ,
I looked at the code and I suppose the issue is with self.maxdisp/4 in the models.
If you check lines 109-112 in models/stackhourglass.py, you will see that the authors now use floor division, self.maxdisp//4, instead of self.maxdisp/4, which Python 3 evaluates to a float.
cost = Variable(torch.FloatTensor(refimg_fea.size()[0], refimg_fea.size()[1]*2, self.maxdisp//4, refimg_fea.size()[2], refimg_fea.size()[3]).zero_()).cuda()
for i in range(self.maxdisp//4):
However, it seems that in models/basic.py (lines 65-68) this still hasn't been changed:
cost = Variable(torch.FloatTensor(refimg_fea.size()[0], refimg_fea.size()[1]*2, self.maxdisp/4, refimg_fea.size()[2], refimg_fea.size()[3]).zero_(), volatile= not self.training).cuda()
for i in range(self.maxdisp/4):
so you might want to change these two lines to floor division yourself and see whether that resolves your issue.
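In case it helps, here is a small self-contained check (mine, not from the repo) of why Python 3's true division breaks those lines and why floor division fixes them:

import torch

maxdisp = 192

# Python 3: "/" always returns a float, so 192 / 4 == 48.0,
# and a float is rejected as a tensor size argument.
try:
    cost = torch.FloatTensor(1, 64, maxdisp / 4, 32, 32).zero_()
except TypeError as e:
    print('true division fails:', e)   # "argument 'size' must be tuple of ints ..."

# Floor division keeps the result an int, as in the updated stackhourglass.py.
cost = torch.FloatTensor(1, 64, maxdisp // 4, 32, 32).zero_()
print(cost.shape)                      # torch.Size([1, 64, 48, 32, 32])

# The loop bound has the same problem: range() also refuses a float.
for i in range(maxdisp // 4):
    pass

Either self.maxdisp//4 or int(self.maxdisp/4) works; the updated stackhourglass.py uses the former.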
Best, Elise