MVDet
Error when load checkpoint: MultiviewDetector.pth
When I run this code:

```python
resume_fname = resume_dir + '/MultiviewDetector.pth'
model.load_state_dict(torch.load(resume_fname, map_location='cuda:0'))
```

I get the following error:
```
RuntimeError: Error(s) in loading state_dict for PerspTransDetector:
Missing key(s) in state_dict: "map_classifier.0.weight", "map_classifier.0.bias", "map_classifier.2.weight", "map_classifier.2.bias", "map_classifier.4.weight".
Unexpected key(s) in state_dict: "base_pt1.0.weight", "base_pt1.1.weight", "base_pt1.1.bias", "base_pt1.1.running_mean", "base_pt1.1.running_var", "base_pt1.1.num_batches_tracked", "base_pt1.4.0.conv1.weight", "base_pt1.4.0.bn1.weight", "base_pt1.4.0.bn1.bias", "base_pt1.4.0.bn1.running_mean", "base_pt1.4.0.bn1.running_var", "base_pt1.4.0.bn1.num_batches_tracked", "base_pt1.4.0.conv2.weight", "base_pt1.4.0.bn2.weight", "base_pt1.4.0.bn2.bias", "base_pt1.4.0.bn2.running_mean", "base_pt1.4.0.bn2.running_var", "base_pt1.4.0.bn2.num_batches_tracked", "base_pt1.4.1.conv1.weight", "base_pt1.4.1.bn1.weight", "base_pt1.4.1.bn1.bias", "base_pt1.4.1.bn1.running_mean", "base_pt1.4.1.bn1.running_var", "base_pt1.4.1.bn1.num_batches_tracked", "base_pt1.4.1.conv2.weight", "base_pt1.4.1.bn2.weight", "base_pt1.4.1.bn2.bias", "base_pt1.4.1.bn2.running_mean", "base_pt1.4.1.bn2.running_var", "base_pt1.4.1.bn2.num_batches_tracked", "base_pt1.5.0.conv1.weight", "base_pt1.5.0.bn1.weight", "base_pt1.5.0.bn1.bias", "base_pt1.5.0.bn1.running_mean", "base_pt1.5.0.bn1.running_var", "base_pt1.5.0.bn1.num_batches_tracked", "base_pt1.5.0.conv2.weight", "base_pt1.5.0.bn2.weight", "base_pt1.5.0.bn2.bias", "base_pt1.5.0.bn2.running_mean", "base_pt1.5.0.bn2.running_var", "base_pt1.5.0.bn2.num_batches_tracked", "base_pt1.5.0.downsample.0.weight", "base_pt1.5.0.downsample.1.weight", "base_pt1.5.0.downsample.1.bias", "base_pt1.5.0.downsample.1.running_mean", "base_pt1.5.0.downsample.1.running_var", "base_pt1.5.0.downsample.1.num_batches_tracked", "base_pt1.5.1.conv1.weight", "base_pt1.5.1.bn1.weight", "base_pt1.5.1.bn1.bias", "base_pt1.5.1.bn1.running_mean", "base_pt1.5.1.bn1.running_var", "base_pt1.5.1.bn1.num_batches_tracked", "base_pt1.5.1.conv2.weight", "base_pt1.5.1.bn2.weight", "base_pt1.5.1.bn2.bias", "base_pt1.5.1.bn2.running_mean", "base_pt1.5.1.bn2.running_var", "base_pt1.5.1.bn2.num_batches_tracked", "base_pt1.6.0.conv1.weight", "base_pt1.6.0.bn1.weight", "base_pt1.6.0.bn1.bias", "base_pt1.6.0.bn1.running_mean", "base_pt1.6.0.bn1.running_var", "base_pt1.6.0.bn1.num_batches_tracked", "base_pt1.6.0.conv2.weight", "base_pt1.6.0.bn2.weight", "base_pt1.6.0.bn2.bias", "base_pt1.6.0.bn2.running_mean", "base_pt1.6.0.bn2.running_var", "base_pt1.6.0.bn2.num_batches_tracked", "base_pt1.6.0.downsample.0.weight", "base_pt1.6.0.downsample.1.weight", "base_pt1.6.0.downsample.1.bias", "base_pt1.6.0.downsample.1.running_mean", "base_pt1.6.0.downsample.1.running_var", "base_pt1.6.0.downsample.1.num_batches_tracked", "base_pt1.6.1.conv1.weight", "base_pt1.6.1.bn1.weight", "base_pt1.6.1.bn1.bias", "base_pt1.6.1.bn1.running_mean", "base_pt1.6.1.bn1.running_var", "base_pt1.6.1.bn1.num_batches_tracked", "base_pt1.6.1.conv2.weight", "base_pt1.6.1.bn2.weight", "base_pt1.6.1.bn2.bias", "base_pt1.6.1.bn2.running_mean", "base_pt1.6.1.bn2.running_var", "base_pt1.6.1.bn2.num_batches_tracked", "world_classifier.0.weight", "world_classifier.0.bias", "world_classifier.2.weight", "world_classifier.2.bias", "world_classifier.4.weight".
```
The missing keys are all under `map_classifier.*`, while the unexpected keys include `world_classifier.*` and `base_pt1.*`, which suggests the checkpoint was saved from a different (possibly older) version of `PerspTransDetector` than the one I instantiate. How can I resolve this?
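For diagnosis, one common workaround when a checkpoint's layer names have drifted from the current model definition is to rename the mismatched prefixes before calling `load_state_dict`. The sketch below is an assumption, not a confirmed fix: it guesses that the checkpoint's `world_classifier.*` parameters correspond to the current model's `map_classifier.*` head (the shapes would still need to match), and it leaves the `base_pt1.*` keys alone so `strict=False` can skip them. Whether this is valid depends on the actual MVDet code versions involved.

```python
def remap_state_dict(state_dict, old_prefix='world_classifier.',
                     new_prefix='map_classifier.'):
    """Return a copy of state_dict with old_prefix renamed to new_prefix.

    Hypothetical mapping: assumes the checkpoint's 'world_classifier'
    head is the same module the current model calls 'map_classifier'.
    """
    remapped = {}
    for key, value in state_dict.items():
        if key.startswith(old_prefix):
            key = new_prefix + key[len(old_prefix):]
        remapped[key] = value
    return remapped

# Usage sketch (assumes torch and the model are set up as in the question):
# checkpoint = torch.load(resume_fname, map_location='cuda:0')
# model.load_state_dict(remap_state_dict(checkpoint), strict=False)
# strict=False skips remaining unexpected keys such as 'base_pt1.*',
# but note that any still-missing keys stay randomly initialized.
```

Printing `model.state_dict().keys()` next to `checkpoint.keys()` first is a cheap way to confirm which renaming, if any, actually applies.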