Dual_SAM
Problems when testing the model
When training the model, I had already downloaded sam_vit_b_01ec64.pth into the working directory beforehand. Training runs normally, and the final usod_prompt2.pth is saved without problems.
# code from train_s.py
sam = sam_model_registry["vit_b"](checkpoint='sam_vit_b_01ec64.pth')
sam = sam[0]
model = LoRA_Sam(sam, 4).cuda()
.....
train_path = '../Datasets/USOD10k/TR'
cfg = dataset_medical.Config(datapath=train_path, savepath='./saved_model/msnet', mode='train', batch=16, lr=0.05, momen=0.9, decay=5e-4, epoch=50)
data = dataset_medical.Data(cfg)
One thing I don't understand is the savepath argument in this Config: what is it for? After running, nothing is actually saved in that directory.
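For context, in training scripts of this kind a savepath is usually the directory that checkpoints get written into. The lines below are only a sketch of that common pattern, reusing the cfg and model objects from the excerpt above; they are not taken from train_s.py:

# hypothetical example, not part of train_s.py
import os
import torch

os.makedirs(cfg.savepath, exist_ok=True)  # './saved_model/msnet' in the config above
torch.save(model.state_dict(), os.path.join(cfg.savepath, 'usod_prompt2.pth'))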
# output during training
......
sses3: 4.726 | losses4: 4.571 | losses5: 4.522 | losses6: 4.452 | losses7: 4.440 | losses8: 0.001 | losses9: 0.001 | losses10: 0.002 | losses11: 0.002 | losses12: 0.001 | losses13: 0.002 | losses14: 0.002 | losses15: 0.002
97%|█████████▋| 1400/1436 [12:14<00:18, 1.90it/s, epoch=0]1400 | losses0: 3.785 | losses1: 3.810 | losses2: 3.748 | losses3: 3.816 | losses4: 3.912 | losses5: 3.961 | losses6: 3.914 | losses7: 3.741 | losses8: 0.001 | losses9: 0.001 | losses10: 0.001 | losses11: 0.001 | losses12: 0.001 | losses13: 0.001 | losses14: 0.001 | losses15: 0.001
100%|██████████| 1436/1436 [12:33<00:00, 1.91it/s, epoch=0
However, after the model was saved, the following error occurred when I tried to run the test.
# code from test_y.py
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
sam = sam[0]
model = LoRA_Sam(sam, 4).cuda()
# path = "mas3k_prompt2.pth"
path = "usod_prompt2.pth"
model.load_state_dict(torch.load(path, weights_only=True))
In your code, model is LoRA_Sam(sam, 4).cuda() in both the training and the test stage, so I don't understand what is causing this problem. The only change I made to the code was to uncomment the self.blocks section in Global_adapter, because with blocks commented out the training code does not run and raises AttributeError: 'Global_adapter' object has no attribute 'blocks'.
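If the failure is the usual load_state_dict key mismatch (the traceback is not shown above, so this is only an assumption), one way to narrow it down is to diff the checkpoint keys against the freshly built model. This is a hypothetical debugging snippet, not part of test_y.py:

# hypothetical debugging snippet, not part of test_y.py
import torch

ckpt = torch.load("usod_prompt2.pth", weights_only=True, map_location="cpu")
model_keys = set(model.state_dict().keys())
ckpt_keys = set(ckpt.keys())
print("in checkpoint but not in model:", sorted(ckpt_keys - model_keys)[:20])
print("in model but not in checkpoint:", sorted(model_keys - ckpt_keys)[:20])

# strict=False reports the same mismatch information while still loading the matching weights
missing, unexpected = model.load_state_dict(ckpt, strict=False)
print("missing:", missing[:20], "unexpected:", unexpected[:20])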
That Global_adapter cannot be commented out; after the class is instantiated there, its methods are used later on. As for savepath, it really is unused: I adapted this from the training code of an earlier project, so that argument is simply left over and does nothing.
Has this problem been resolved?