AdaptSegNet
Parameter mismatch when loading VGG ImageNet pre-trained weights
First of all, thanks for sharing the project!
As in the previous issue, the given VGG ImageNet pre-trained weights do not match the model architecture in ./model/deeplab_vgg.py.
I got the same error message as in the issue:
RuntimeError: Error(s) in loading state_dict for DeeplabVGG: Unexpected key(s) in state_dict: "0.weight", "0.bias", "2.weight", "2.bias", "5.weight", "5.bias", "7.weight", "7.bias", "10.weight", "10.bias", "12.weight", "12.bias", "14.weight", "14.bias", "17.weight", "17.bias", "19.weight", "19.bias", "21.weight", "21.bias", "24.weight", "24.bias", "26.weight", "26.bias", "28.weight", "28.bias", "6.weight", "6.bias", "3.weight", "3.bias"
Note that in that issue, alphjheon simply commented out the weight-loading code to avoid the error.
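Before commenting the loading code out entirely, it can help to see exactly which keys disagree. The unexpected keys in the error ("0.weight", "2.weight", ...) look like the raw numeric names of an nn.Sequential, which suggests a naming mismatch rather than a missing architecture. A minimal sketch of such a diagnosis, using toy modules (the DeeplabVGG model and the real .pth file are assumed, not reproduced here):

```python
# Hedged sketch: compare checkpoint keys against the model's own state_dict
# keys before calling load_state_dict, to see whether the failure is a
# naming mismatch or genuinely absent parameters.
import torch.nn as nn


def diff_keys(model, checkpoint):
    """Return (unexpected, missing) key sets for a checkpoint/model pair."""
    model_keys = set(model.state_dict().keys())
    ckpt_keys = set(checkpoint.keys())
    return ckpt_keys - model_keys, model_keys - ckpt_keys


# Toy stand-ins: a raw nn.Sequential checkpoint ("0.weight", ...) versus a
# model that namespaces the very same layers under a "features." prefix.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
model = nn.Sequential()
model.add_module("features", backbone)

unexpected, missing = diff_keys(model, backbone.state_dict())
print(sorted(unexpected))  # ['0.bias', '0.weight', '2.bias', '2.weight']
print(sorted(missing))     # same tensors, but under the 'features.' prefix
```

If the two lists pair up like this, renaming the checkpoint keys (or loading them into the matching submodule) recovers the pre-trained weights instead of discarding them.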
Thanks.
I got a similar error message while loading the baseline model:
RuntimeError: Error(s) in loading state_dict for ResNetMulti:
Missing key(s) in state_dict: "layer6.conv2d_list.0.weight", "layer6.conv2d_list.0.bias", "layer6.conv2d_list.1.weight", "layer6.conv2d_list.1.bias", "layer6.conv2d_list.2.weight", "layer6.conv2d_list.2.bias", "layer6.conv2d_list.3.weight", "layer6.conv2d_list.3.bias".
size mismatch for layer5.conv2d_list.0.weight: copying a param with shape torch.Size([19, 2048, 3, 3]) from checkpoint, the shape in current model is torch.Size([19, 1024, 3, 3]).
@JuiChang I get exactly the same error as you. You can change model.load_state_dict(new_params) to model.load_state_dict(new_params, strict=False). Then it works.
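One caveat on strict=False: it only tolerates missing and unexpected keys; in recent PyTorch versions a shape mismatch (like the layer5 one above) still raises. A more defensive sketch, shown on toy modules rather than the repo's ResNetMulti, filters the checkpoint down to entries whose name and shape both match before loading:

```python
# Hedged sketch: keep only checkpoint entries whose name AND shape match the
# current model, then load with strict=False. This sidesteps both the
# missing-key error and the size-mismatch error in one pass.
import torch
import torch.nn as nn


def load_matching(model, checkpoint):
    """Restore only compatible parameters; return the keys that were kept."""
    own = model.state_dict()
    filtered = {k: v for k, v in checkpoint.items()
                if k in own and v.shape == own[k].shape}
    model.load_state_dict(filtered, strict=False)
    return filtered


# Toy example: the second layer's shape differs between checkpoint and
# model, so only the first layer's weights are restored.
src = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 2))
dst = nn.Sequential(nn.Linear(4, 4), nn.Linear(4, 3))
restored = load_matching(dst, src.state_dict())
print(sorted(restored))  # only the '0.*' parameters survive the filter
```

The price of this approach is silence: incompatible layers (here, the whole layer5/layer6 classifier head) simply keep their random initialization, so it is worth logging which keys were dropped.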