
cannot load model

BaderTim opened this issue 9 months ago · 0 comments

Hi, I receive an error when I load the model manually like this:

import argparse
import torch

from models.model import EcoDepth  # adjust to wherever EcoDepth is defined in your clone of the repo

args = argparse.Namespace()

# Manually set the arguments
args.min_depth = 1e-3
args.max_depth = 128
args.flip_test = True
args.ckpt_dir = "./checkpoints/kitti.ckpt"
args.vit_model = "google/vit-base-patch16-224"
args.max_depth_eval = 128
args.no_of_classes = 200
args.deconv_kernels = [2, 2, 2]
args.num_filters = [32, 32, 32]
args.num_deconv = 3

DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = EcoDepth(args=args)
model_weight = torch.load(args.ckpt_dir)['model']
model.load_state_dict(model_weight)
model.to(DEVICE)
model.eval()

When executing this code, I receive:

LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels

Loading openai/clip-vit-large-patch14 to CLIPTextModel.....

Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ...

- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Loaded openai/clip-vit-large-patch14 to CLIPTextModel.....

ddpm.py: Restored from ../checkpoints/v1-5-pruned-emaonly.ckpt with 0 missing and 2 unexpected keys
Unexpected Keys: ['model_ema.decay', 'model_ema.num_updates']

RuntimeError                              Traceback (most recent call last)
Cell In[27], line 19
     17 model = EcoDepth(args=args)
     18 model_weight = torch.load(args.ckpt_dir)['model']
---> 19 model.load_state_dict(model_weight)
     20 model.to(DEVICE)
     21 model.eval()

File /anaconda/envs/ecodepth/lib/python3.11/site-packages/torch/nn/modules/module.py:2041, in Module.load_state_dict(self, state_dict, strict)

RuntimeError: Error(s) in loading state_dict for EcoDepth:
	size mismatch for encoder.cide_module.embeddings: copying a param with shape torch.Size([100, 768]) from checkpoint, the shape in current model is torch.Size([200, 768]).
	size mismatch for encoder.cide_module.fc.2.weight: copying a param with shape torch.Size([100, 400]) from checkpoint, the shape in current model is torch.Size([200, 400]).
	size mismatch for encoder.cide_module.fc.2.bias: copying a param with shape torch.Size([100]) from checkpoint, the shape in current model is torch.Size([200]).
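
The mismatch (100 vs. 200) suggests that the released kitti.ckpt was trained with no_of_classes = 100 rather than the 200 I set above. A quick way to check what the checkpoint actually contains (a minimal sketch, assuming the same checkpoint path and key names as in the error message):

import torch

# Sketch: inspect the checkpoint to see how many classes its CIDE embeddings were trained with.
# The key name is taken directly from the error message above.
state = torch.load("./checkpoints/kitti.ckpt", map_location="cpu")["model"]
print(state["encoder.cide_module.embeddings"].shape)  # expected: torch.Size([100, 768])

If this prints torch.Size([100, 768]), then presumably setting args.no_of_classes = 100 would make the shapes line up, unless the KITTI checkpoint is supposed to contain 200 classes.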

BaderTim · May 22, 2024, 17:05