Size mismatch error when loading the ddd_ae model
Hello. I'm using the model to denoise depth images captured with an Intel RealSense D455 depth camera. The standard ddd model works well, but when I switch to the ddd_ae model, the filter shapes don't match the checkpoint:
```
weights = torch.load(init, map_location={'cuda:1':'cuda:0'})
Traceback (most recent call last):
  File "/media/astik/NewVolume1/DeepDepthDenoising/inference.py", line 116, in <module>
    run_model(
  File "/media/astik/NewVolume1/DeepDepthDenoising/inference.py", line 60, in run_model
    utils.init.initialize_weights(model, model_path)
  File "/media/astik/NewVolume1/DeepDepthDenoising/utils/init.py", line 27, in initialize_weights
    model.load_state_dict(weights["model_state_dict"])
  File "/home/astik/anaconda3/envs/drones/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict
    raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for PartialNet:
    size mismatch for encoder_conv1_1.conv.weight: copying a param with shape torch.Size([16, 1, 7, 7]) from checkpoint, the shape in current model is torch.Size([8, 1, 7, 7]).
    size mismatch for encoder_conv1_2.conv.weight: copying a param with shape torch.Size([32, 16, 5, 5]) from checkpoint, the shape in current model is torch.Size([16, 8, 5, 5]).
    size mismatch for encoder_conv2_1.conv.weight: copying a param with shape torch.Size([64, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
    size mismatch for encoder_conv2_2.conv.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
    size mismatch for encoder_conv2_3.conv.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
    size mismatch for encoder_conv3_1.conv.weight: copying a param with shape torch.Size([128, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 32, 3, 3]).
    size mismatch for encoder_conv3_2.conv.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for encoder_conv3_3.conv.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for encoder_conv4.conv.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for encoder_resblock1.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for encoder_resblock1.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for encoder_resblock2.conv1.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for encoder_resblock2.conv2.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for decoder_deconv4.conv.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for decoder_conv_id_3.0.weight: copying a param with shape torch.Size([128, 256, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
    size mismatch for decoder_deconv3_3.conv.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for decoder_deconv3_2.conv.weight: copying a param with shape torch.Size([128, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
    size mismatch for decoder_deconv3_1.conv.weight: copying a param with shape torch.Size([64, 128, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3]).
    size mismatch for decoder_conv_id_2.0.weight: copying a param with shape torch.Size([64, 128, 1, 1]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
    size mismatch for decoder_deconv2_3.conv.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
    size mismatch for decoder_deconv2_2.conv.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
    size mismatch for decoder_deconv2_1.conv.weight: copying a param with shape torch.Size([32, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([16, 32, 3, 3]).
    size mismatch for decoder_conv_id_1.0.weight: copying a param with shape torch.Size([32, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 32, 1, 1]).
    size mismatch for decoder_deconv1_2.conv.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([8, 16, 3, 3]).
    size mismatch for decoder_deconv1_1.conv.weight: copying a param with shape torch.Size([1, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([1, 8, 3, 3]).
```
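One thing I noticed: every channel count in the checkpoint is exactly twice what my instantiated model has (16 vs 8, 32 vs 16, 64 vs 32, 128 vs 64), while the fixed 1-channel depth input/output stays the same. So I suspect the script is building the network with half the base feature width that the ddd_ae checkpoint was trained with (just my guess; I don't know which parameter controls this). A quick sanity check on a few shapes copied from the traceback:

```python
# (checkpoint shape, current-model shape) pairs taken from the
# size-mismatch messages above; only a representative subset.
pairs = [
    ((16, 1, 7, 7), (8, 1, 7, 7)),      # encoder_conv1_1
    ((32, 16, 5, 5), (16, 8, 5, 5)),    # encoder_conv1_2
    ((64, 32, 3, 3), (32, 16, 3, 3)),   # encoder_conv2_1
    ((128, 64, 3, 3), (64, 32, 3, 3)),  # encoder_conv3_1
    ((1, 16, 3, 3), (1, 8, 3, 3)),      # decoder_deconv1_1 (1-channel output kept)
]

# Check that each out/in channel dimension is either equal (the fixed
# 1-channel depth map) or exactly doubled in the checkpoint.
for ckpt_shape, model_shape in pairs:
    for c, m in zip(ckpt_shape[:2], model_shape[:2]):
        assert c == m or c == 2 * m, (ckpt_shape, model_shape)

print("checkpoint channel widths are consistently 2x the model's")
```

So it looks like the ddd_ae weights want a network twice as wide as the one inference.py constructs for me by default.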
Could you kindly share how to fix this?