segmentation_models.pytorch
Multi-class segmentation can't predict classes other than 0, 1
Hello,
Thanks for your great contribution. I used your model to train on my image dataset, which has 5 classes. I tried Unet and DeepLabV3+ with different activation functions and loss = DiceLoss. However, I usually get near-perfect Dice loss and IoU because most pixels belong to class 0, but the model can never predict classes 2, 3, or 4. Do you know what's going wrong?
Thanks, Wei
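[Editor's note] One common explanation for this symptom is that an unweighted, averaged Dice score is dominated by the background class, so a model that predicts class 0 everywhere still looks "perfect". A minimal sketch of a per-class Dice check (a hypothetical `dice_per_class` helper, written in NumPy for self-containedness; the same logic applies to PyTorch tensors) makes the failure visible:

```python
import numpy as np

def dice_per_class(pred, target, num_classes, eps=1e-7):
    """Per-class Dice on integer label maps of shape [B, H, W]."""
    scores = []
    for c in range(num_classes):
        p = (pred == c)
        t = (target == c)
        inter = (p & t).sum()
        denom = p.sum() + t.sum()
        scores.append((2.0 * inter + eps) / (denom + eps))
    return np.array(scores)

# Toy example: 95% of pixels are background (class 0).
target = np.zeros((1, 20, 20), dtype=np.int64)
target[0, :2, :10] = 1        # a small class-1 region
pred = np.zeros_like(target)  # model predicts background everywhere

scores = dice_per_class(pred, target, num_classes=2)
print(scores)        # class 0 near 1.0, class 1 near 0.0
print(scores.mean()) # the macro average exposes the failure
```

Reporting Dice per class (or macro-averaging while excluding the background) shows whether the model has actually collapsed to class 0, in which case class weighting or a loss less sensitive to imbalance is worth trying.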
Could you upload the core lines of your code? Maybe some minor oversight caused the problem, because it works perfectly for me.
I am also having trouble with the multi-class segmentation code. I changed the activation function to "softmax" and replaced the Dice function in utils with the one found in losses. I also converted the masks with argmax so that each is a single layer instead of "RGB": the labels are now shape [8, 256, 256] (where 8 is the batch size and 256 the image size) and the images are [8, 3, 256, 256] (since they are RGB). Besides that, everything is the same as the single-class segmentation code, which worked fine.
The error I get is "RuntimeError: The size of tensor a (8) must match the size of tensor b (3) at non-singleton dimension 1".
Any insights would be really helpful! Thank you in advance!
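[Editor's note] If the loss compares the prediction and target element-wise (which the error message suggests), the integer label map [B, H, W] must be one-hot encoded to [B, C, H, W] to match the model output. A minimal sketch of that conversion (a hypothetical `one_hot` helper, written in NumPy; in PyTorch the equivalent would be `F.one_hot(mask, num_classes).permute(0, 3, 1, 2)`):

```python
import numpy as np

def one_hot(mask, num_classes):
    """Convert integer masks [B, H, W] to one-hot [B, C, H, W]."""
    classes = np.arange(num_classes)[None, :, None, None]
    return (mask[:, None, :, :] == classes).astype(np.float32)

mask = np.random.randint(0, 5, size=(8, 256, 256))  # 5 classes, batch of 8
target = one_hot(mask, num_classes=5)
print(target.shape)  # (8, 5, 256, 256) — matches the model output [B, C, H, W]
```

Alternatively, if you use the loss from `smp.losses`, `DiceLoss(mode="multiclass")` is documented to accept integer label maps directly, so no one-hot step should be needed there (worth verifying against the version you have installed).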
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 7 days.
This issue was closed because it has been stalled for 7 days with no activity.
Have you solved the problem?
Hello! Did you ever solve the problem you quoted last year? I am having the same issue: the batch size can't be set to more than 1, and the code only runs when I set it to 1. I have no idea how to deal with it.
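[Editor's note] The "works only with batch size 1" symptom is consistent with the shape mismatch above: with batch size 1 the label map [1, H, W] happens to broadcast against the prediction [1, C, H, W] (it aligns as if it were [1, 1, H, W]), so no error is raised even though the arithmetic is wrong; with batch size 8 the broadcast fails at dimension 1 (8 vs. C). A sketch in NumPy, whose broadcasting rules PyTorch follows:

```python
import numpy as np

pred = np.zeros((1, 3, 256, 256))  # [B, C, H, W], batch size 1
mask = np.zeros((1, 256, 256))     # [B, H, W] integer labels

# Shapes align from the trailing dimension, so (1, 256, 256) broadcasts
# against (1, 3, 256, 256) by accident — no error, but the math is wrong.
print((pred * mask).shape)  # (1, 3, 256, 256)

pred8 = np.zeros((8, 3, 256, 256))
mask8 = np.zeros((8, 256, 256))
try:
    pred8 * mask8
except ValueError as e:
    # Mirrors PyTorch's "The size of tensor a (8) must match the size of
    # tensor b (3) at non-singleton dimension 1".
    print(e)
```

So batch size 1 doesn't actually fix anything; it just silences the error. One-hot encoding the masks (or using a loss that accepts integer labels) is the real fix.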
Have you solved the problem?
Hello! Have you solved your problem?
Has there been any progress on this issue? There is currently no workaround or update on a solution, and I am having the same problem.