Qsingle


I do not think the reason is that the former is not installed. Could you provide more information?

What did you change `num_class` to? Did you divide the mask by 255 to scale the label values to `0` or `1`?
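
For reference, a minimal sketch of that preprocessing, assuming the masks are stored as single-channel 0/255 PNG files (the file path here is illustrative, not the repository's own loader):

```python
import numpy as np
from PIL import Image

# Illustrative path; replace with your own mask file.
mask = np.array(Image.open("mask.png").convert("L"))

# A 0/255 binary mask divided by 255 yields labels 0 and 1,
# which matches training with num_class=2.
label = (mask // 255).astype(np.int64)
assert set(np.unique(label)).issubset({0, 1})
```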

> #6 After reading the previous discussion, I changed num_classes to 255 and still got the same error, so I changed it to 256 and it ran successfully after...

You can use the following code to get the final mask.

```python
import torch

pred = model(x)
pred = torch.softmax(pred, dim=1)
pred = torch.max(pred, dim=1)[1]  # take the index of the max value, i.e. the predicted class per pixel
```

You can follow our paper for the configuration. How to change the configuration is shown in #5 and #3.

> After dividing the pixels of the binary image by 255 and setting num_class to 2 for training, I encountered an error when loading the model for segmentation: Missing key(s) in...

> yes im sure

This error occurs when the checkpoint is not correct. Could you provide the command and the whole log? Thank you very much.
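
As a quick sanity check while debugging, one can load the checkpoint and inspect its keys before training; this is a hypothetical snippet assuming a plain PyTorch `.pth` state dict, as with the official SAM checkpoints:

```python
import torch

# Illustrative path; use the file passed via --checkpoint.
state_dict = torch.load("sam_vit_b_01ec64.pth", map_location="cpu")

# A valid SAM checkpoint is a flat state dict of tensors; its keys
# should start with image_encoder / prompt_encoder / mask_decoder.
print(len(state_dict), "entries")
for key in list(state_dict.keys())[:5]:
    print(key, tuple(state_dict[key].shape))
```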

> `python train_learnable_sam.py --image C:\Users\Duan\Desktop\LearnablePromptSAM-main\train\image_cut --mask_path C:\Users\Duan\Desktop\LearnablePromptSAM-main\train\gt_cut --model_name vit_b --checkpoint C:\Users\Duan\Desktop\LearnablePromptSAM-main\ckpts\sam_vit_b_01ec64.pth --save_path C:\Users\Duan\Desktop\LearnablePromptSAM-main\ckpts --lr 0.05 --mix_precision --optimizer sgd`

Sorry, please provide the training log. You can try this...

Thank you very much. This is one limitation of the current research work: we cannot do arbitrary-size super-resolution, so we set the output to one fixed size, e.g.,...

The position embedding of SAM requires the input size to be 1024x1024; you can resize the position embedding of SAM or adjust the input size of the image.
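
For illustration, a minimal sketch of resizing a ViT-style positional embedding with bicubic interpolation; it assumes the encoder stores `pos_embed` with shape `(1, H, W, C)`, as the SAM image encoder does, and the target grid size is up to you:

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed: torch.Tensor, new_hw: tuple) -> torch.Tensor:
    """Interpolate a (1, H, W, C) positional embedding to a new (H', W') grid."""
    pe = pos_embed.permute(0, 3, 1, 2)  # (1, H, W, C) -> (1, C, H, W) for F.interpolate
    pe = F.interpolate(pe, size=new_hw, mode="bicubic", align_corners=False)
    return pe.permute(0, 2, 3, 1)  # back to (1, H', W', C)

# Example: a 1024x1024 input with patch size 16 gives a 64x64 grid
# (embed dim 768 for vit_b); adapt it to 32x32 for a 512x512 input.
pe = torch.randn(1, 64, 64, 768)
print(resize_pos_embed(pe, (32, 32)).shape)  # torch.Size([1, 32, 32, 768])
```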