
How to use the code for Inference?

zeinebBC opened this issue 1 year ago · 9 comments

I'm seeking clarity on utilizing the code during inference for testing a fine-tuned model on a dataset without target masks. Is there any guidance provided in the associated paper or repository on how to perform this task effectively? What prompting techniques could I employ when I don't have information regarding the target masks' locations? How can I evaluate the accuracy of the predicted masks in the absence of target masks?

zeinebBC avatar Jan 10 '24 08:01 zeinebBC

+1, I was trying to use val.py, but no luck. May need author's help.

FJGEODEV avatar Feb 26 '24 04:02 FJGEODEV

1. You cannot evaluate prediction accuracy without target masks (that is, without ground truth).
2. SAM is an interactive model, so the common assumption is that the user provides a prompt for each image (such as a click on the target object). In this code, we generate that prompt from the target mask to simulate a user-given prompt. If you have neither a user-given prompt nor a target-mask-generated prompt, you may want to try the "segment everything" setting described in the SAM paper: the image is click-prompted on a regular grid, and the top-k highest-confidence predicted objects are kept. To use it, you need to train the adapters under this setting.
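The "segment everything" idea above can be sketched roughly as follows. This is a minimal NumPy illustration, not code from this repo; `grid_point_prompts` and `top_k_masks` are hypothetical helper names:

```python
import numpy as np

def grid_point_prompts(h, w, points_per_side=8):
    """Evenly spaced (x, y) click prompts covering the whole image."""
    xs = np.linspace(0.5, w - 0.5, points_per_side)
    ys = np.linspace(0.5, h - 0.5, points_per_side)
    return np.array([(x, y) for y in ys for x in xs])

def top_k_masks(masks, scores, k=3):
    """Keep the k predicted masks with the highest confidence scores."""
    order = np.argsort(scores)[::-1][:k]
    return [masks[i] for i in order]
```

Each grid point would be fed to the model as a point prompt; the per-mask confidence the model returns is then used to rank and filter the candidate masks.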

WuJunde avatar Feb 26 '24 17:02 WuJunde

According to the instructions in readme.md, I have trained the model and obtained best_checkpoint. May I ask how to load this checkpoint for subsequent segmentation tasks?

janexue001 avatar Mar 08 '24 13:03 janexue001


Evaluation: The code automatically evaluates the model on the test set during training; set "--val_freq" to control how often (in epochs) to evaluate. You can also run val.py for independent evaluation.
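For loading best_checkpoint yourself rather than going through val.py, the usual PyTorch pattern looks like the sketch below. This is a generic illustration, not the repo's actual loading code: the checkpoint key ('state_dict') and the stand-in network are assumptions, so check what the training loop actually saves.

```python
import torch
import torch.nn as nn

# Stand-in for the fine-tuned network; replace with the real model class.
net = nn.Conv2d(3, 1, kernel_size=3, padding=1)

# Save and reload, mirroring a typical best_checkpoint workflow.
torch.save({'state_dict': net.state_dict()}, 'best_checkpoint.pth')
ckpt = torch.load('best_checkpoint.pth')
net.load_state_dict(ckpt['state_dict'])
net.eval()

with torch.no_grad():
    logits = net(torch.randn(1, 3, 64, 64))   # one RGB image
mask = torch.sigmoid(logits) > 0.5            # binary segmentation mask
```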

Result Visualization: Set the "--vis" parameter to control how often (in epochs) results are visualized during training or evaluation.
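For quickly inspecting predictions outside the built-in "--vis" hook, a predicted mask can be blended onto the input image. A small NumPy sketch (`overlay_mask` is an illustrative helper, not part of the repo):

```python
import numpy as np

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.5):
    """Blend a binary mask onto an RGB uint8 image for visual inspection."""
    out = image.astype(float)
    for ch in range(3):
        out[..., ch] = np.where(mask,
                                (1 - alpha) * out[..., ch] + alpha * color[ch],
                                out[..., ch])
    return out.astype(np.uint8)
```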

By default, everything is saved under ./logs/.

WuJunde avatar Mar 08 '24 21:03 WuJunde


Thank you for your reply. The details of the training process can indeed be seen in logs. However, besides that, I want to see the visual segmentation results performed with the trained model.

janexue001 avatar Mar 14 '24 08:03 janexue001


In addition, I would like to ask you one more question.

I tried multi-class segmentation by setting "-multimask_output" to 2 in cfg.py. This worked successfully with the sam model, but with efficient_sam I get: ValueError: Target size (torch.Size([16, 2, 256, 256])) must be the same as input size (torch.Size([16, 1, 256, 256])).

All the best to you


janexue001 avatar Mar 15 '24 02:03 janexue001

How can I evaluate 'OpticDisc_Fundus_SAM_1024.pth' and 'sam_vit_b_01ec64.pth' on the 'REFUGE' dataset?

Part-Work avatar Apr 06 '24 07:04 Part-Work



You need to modify the parts related to num_multimask_output in EfficientSAM, following SAM's code.
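The shape mismatch reported above is the usual symptom of a BCE-style loss receiving a C-channel prediction and a 1-channel target (or vice versa). A hedged NumPy sketch of the idea — expanding an integer-label target to one channel per class so it matches the prediction. `match_target_channels` and the label convention (labels 1..C, 0 = background) are assumptions for illustration, not the repo's code:

```python
import numpy as np

def match_target_channels(pred, target):
    """Make the target's channel count match the prediction's.

    pred:   (B, C, H, W) logits, one channel per class
    target: (B, 1, H, W) integer labels, 0 = background, 1..C = classes
    """
    b, c, h, w = pred.shape
    if target.shape[1] == c:
        return target  # already matching; nothing to do
    onehot = np.zeros_like(pred)
    labels = target[:, 0].astype(int)
    for ch in range(c):
        onehot[:, ch] = (labels == ch + 1).astype(pred.dtype)
    return onehot
```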

visionbike avatar Sep 24 '24 01:09 visionbike