LearnablePromptSAM

Is it possible to provide pre-training weights?

Open zxcvbnmkj opened this issue 9 months ago • 4 comments

Since I'm not using a training set of my own at the moment and only have a handful of images for testing, could you please provide your pre-trained weights so that I can test segmentation on my current images?

zxcvbnmkj avatar Apr 06 '25 10:04 zxcvbnmkj

Sorry, which type of image? We only have the weights for CF, OCT, and OCTA. You can use the SAM weights directly.

Qsingle avatar Apr 07 '25 06:04 Qsingle

A million thanks for your reply! I'm a beginner in this field. A few days ago my teacher gave me a new assignment: to label a few microscope images of cellular filaments. The filaments are too faint for the human eye to see, so I tried to segment them automatically with a segmentation foundation model, but I ran into the problem that SAM tends to segment out blocky regions instead. Then I saw your paper, which seemed very powerful, so I would like to try your model for this task. Since I only have 6 images, I can't realistically fine-tune on my own image features, and my computer isn't powerful enough to fine-tune SAM with your method; that's why I came to ask whether you have fine-tuned weights available. I apologize for taking up your time!

Regarding my previous comment, "pre-training weights" doesn't seem to be the accurate term; I'm very sorry. Perhaps "fine-tuned weights" would make more sense. Please correct me!

zxcvbnmkj avatar Apr 07 '25 08:04 zxcvbnmkj

Dear authors, thank you very much for your excellent work and patient answers! I trained with your code on the FIVES fundus dataset and the results were excellent. I then randomly picked 34 images from FIVES to build a few-shot dataset; training still worked very well and segmented the fundus images almost perfectly. Next I switched to another dataset (not one from your paper, but a dataset I found in the wild), the DRIVE fundus dataset, and trained on it several times, but each time the IoU was only a fraction of a percent and kept trending downward. I suspected class imbalance (the foreground-to-background pixel ratio in the samples is roughly 1:9), so I added class weights to the cross-entropy loss; this raised the IoU to 13%, with predictions like the image below:

[Image: prediction result on DRIVE]
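The class-weighting fix described above might look something like the following in PyTorch (a minimal sketch, not the repository's actual training code; the 1:9 foreground-to-background ratio comes from the comment, and inverse-frequency weighting is one common choice):

```python
import torch
import torch.nn as nn

# Assumed class frequencies from the comment: ~90% background, ~10% foreground.
# Inverse-frequency weights upweight the rare foreground class so the model
# does not collapse to predicting background everywhere.
weights = torch.tensor([1.0 / 0.9, 1.0 / 0.1])  # [background, foreground]
criterion = nn.CrossEntropyLoss(weight=weights)

# Dummy logits of shape (N, C, H, W) and integer labels (N, H, W) in {0, 1}.
logits = torch.randn(2, 2, 8, 8)
target = torch.randint(0, 2, (2, 8, 8))
loss = criterion(logits, target)
```

Whether inverse-frequency weights are the best choice depends on the dataset; they are simply one standard starting point for a 1:9 imbalance.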

This left me baffled: the two fundus datasets, FIVES and DRIVE, look quite similar, yet the training results are worlds apart. Both datasets' masks contain only the two pixel values {0, 255}; I set divide to true so the labels are converted to the two values {0, 1}, and set num_class to 2, i.e. all training settings are identical. To check whether the __getitem__ function was corrupting some information in the DRIVE dataset, I visualized the image and label (x and target) after that function's processing, shown below, which suggests this step is not at fault either. A model trained on FIVES can even segment the DRIVE images quite well.

[Image: visualized x and target after __getitem__]
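As a self-contained illustration of the label conversion being checked here (the function name is hypothetical, not the repository's API), a {0, 255} mask should reach the loss as {0, 1} class indices when divide is true:

```python
import numpy as np

def to_class_indices(mask, divide=True):
    """Map a {0, 255} binary mask to {0, 1} integer class labels."""
    mask = np.asarray(mask)
    if divide:
        mask = mask // 255  # 0 -> 0, 255 -> 1
    return mask.astype(np.int64)

mask = np.array([[0, 255], [255, 0]], dtype=np.uint8)
target = to_class_indices(mask)
print(np.unique(target))  # -> [0 1]
```

If the converted labels contain any value other than 0 and 1, num_class=2 training will silently misbehave, so checking np.unique on a few targets is a cheap sanity test.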

I have been digging into this for a whole day and still cannot figure out why, which is why I came to ask you. Do you see any clue? Any advice would be greatly appreciated!

For reference, here is the DRIVE dataset description: the images were acquired in a diabetic retinopathy screening program in the Netherlands, using a Canon CR5 non-mydriatic 3CCD camera with a 45-degree field of view (FOV). Each image is 584*565 pixels with 8 bits per color channel. The dataset contains 40 images in total; in both splits, each image comes with a circular FOV mask about 540 pixels in diameter. In the training set, each image has a manual segmentation made by an ophthalmology expert. The images use the .tif extension and the labels use .gif.
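Given the formats described above (.tif images, .gif labels), loading a DRIVE image/label pair might look like this (file names are illustrative; a tiny synthetic pair is written first so the snippet runs on its own):

```python
import numpy as np
from PIL import Image

# Write a tiny synthetic pair (in practice these come from the dataset).
Image.fromarray(np.zeros((8, 8, 3), dtype=np.uint8)).save("image.tif")
mask = np.array([[0, 255]] * 8, dtype=np.uint8).repeat(4, axis=1)  # 8x8, {0, 255}
Image.fromarray(mask).save("label.gif")

# Load the pair the way a Dataset's __getitem__ might.
image = np.array(Image.open("image.tif").convert("RGB"))  # 8-bit RGB image
label = np.array(Image.open("label.gif"))                 # GIF mask
label = (label > 0).astype(np.int64)                      # map to {0, 1}
```

Using `label > 0` rather than dividing by 255 is deliberately defensive: GIF labels open in palette mode, so the raw values may come back as indices rather than gray levels, and thresholding handles both cases.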

zxcvbnmkj avatar Apr 10 '25 07:04 zxcvbnmkj

Hello, dear authors. Thank you for your paper. Could you please provide your final weights for the ophthalmic datasets you have?

NITR098 avatar Aug 22 '25 12:08 NITR098