
Errors keep appearing after switching to a grayscale dataset

Open YLONl opened this issue 5 years ago • 6 comments

After switching to my grayscale dataset, I kept getting errors; I'd fix one and another would appear. The main issues were dimension parameters and channel counts. After my changes, the final error pointed into the installed package files, and I tried various approaches without success. The files involved are my dataset .py under dataloader, segbase.py, and train.py. I'd like to ask the author: after switching to [1, 512, 512] grayscale images, what needs to be changed?

YLONl avatar Dec 24 '19 08:12 YLONl

You need to change the first convolutional layer.

Tramac avatar Dec 24 '19 08:12 Tramac
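A minimal sketch of what "change the first convolutional layer" could look like, assuming a VGG-style backbone whose first layer is a 3x3 conv with 64 filters (the layer names here are illustrative, not the repo's exact attributes). Summing the pretrained RGB filters into one channel also sidesteps the state_dict size mismatch reported further down this thread:

```python
import torch
import torch.nn as nn

# Illustrative VGG-style first layer: 3 input channels (RGB), 64 filters.
old_conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)

# Replacement layer for 1-channel grayscale input.
new_conv = nn.Conv2d(1, 64, kernel_size=3, padding=1)

with torch.no_grad():
    # Reuse the (pretrained) RGB filters by summing them over the channel
    # axis, so the response to a gray image matches the original network's
    # response to that image replicated to 3 channels.
    new_conv.weight.copy_(old_conv.weight.sum(dim=1, keepdim=True))
    new_conv.bias.copy_(old_conv.bias)

# A [1, 512, 512] grayscale batch now passes through cleanly.
x = torch.randn(1, 1, 512, 512)
y = new_conv(x)  # shape [1, 64, 512, 512]
```

With real pretrained weights, `old_conv.weight` would come from the loaded checkpoint rather than random initialization.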

Thank you very much for your quick reply ❤❤. I will try!

YLONl avatar Dec 24 '19 08:12 YLONl

@Tramac hi, I changed the channels of the first conv layer, and it shows:

RuntimeError: Error(s) in loading state_dict for VGG: size mismatch for features.0.weight: copying a param with shape torch.Size([64, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 1, 3, 3]).

So I deleted the '.pth' of the pretrained_base and set pretrained_base = False. It then shows:

RuntimeError: output with shape [1, 256, 256] doesn't match the broadcast shape [3, 256, 256]

OK, I googled and changed the code:

    input_transform = transforms.Compose([
        transforms.ToTensor(),
        #transforms.Lambda(lambda x: x.repeat(3,1,1)),
        #transforms.Normalize([.485, .456, .406], [.229, .224, .225]),
        transforms.Normalize([0.5], [0.5]),
    ])

but it errored again:

    Traceback (most recent call last):
      File "train.py", line 334, in <module>
        trainer.train()
      File "train.py", line 229, in train
        loss_dict = self.criterion(outputs, targets)
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/zhangxl003/sheyulong/all-seg-pytorch-1220/core/utils/loss.py", line 34, in forward
        return dict(loss=super(MixSoftmaxCrossEntropyLoss, self).forward(*inputs))
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/modules/loss.py", line 942, in forward
        ignore_index=self.ignore_index, reduction=self.reduction)
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2056, in cross_entropy
        return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
      File "/home/zhangxl003/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1873, in nll_loss
        ret = torch._C._nn.nll_loss2d(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
    RuntimeError: 1only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [4, 256, 256, 3]

So, what should I do? Please...

YLONl avatar Dec 24 '19 14:12 YLONl
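The nll_loss error above says the targets have shape [4, 256, 256, 3]: the label images are being loaded as 3-channel RGB, while cross-entropy expects an [N, H, W] tensor of integer class indices. A sketch of a label transform under that assumption (the function name is mine, not the repo's; note that ToTensor/Normalize must not be applied to labels, since they rescale the values):

```python
import numpy as np
import torch
from PIL import Image

def mask_to_target(mask_img):
    """Convert a PIL label image (possibly saved as RGB) into an
    [H, W] LongTensor of class indices.

    Assumes the mask's gray value already encodes the class id;
    color-coded masks would need an explicit color -> class lookup.
    """
    gray = mask_img.convert('L')  # drop to a single channel: H x W, not H x W x 3
    return torch.from_numpy(np.array(gray, dtype=np.int64))
```

Batched targets then come out as [N, H, W], which is the 3D shape nll_loss2d expects.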

Maybe there is some other parameter in the first convolutional layer I need to change? Or something else?

YLONl avatar Dec 24 '19 14:12 YLONl

Maybe there is some other parameter in the first convolutional layer I need to change? Or something else?

I ran into exactly the same error as you, and I don't think it is a grayscale-image problem. My error was identical to yours; after the following two steps everything went back to normal:

  • Printed the target data and found the values were invalid
  • Re-made the label map so the labels run from 0 to C

deadpoppy avatar Mar 22 '20 02:03 deadpoppy
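The two steps above can be sketched as follows (a hypothetical helper, assuming labels arrive as a NumPy array): print the unique raw values in the target, then remap them onto contiguous class ids 0..C-1, which is what cross_entropy expects:

```python
import numpy as np

def remap_labels(target):
    """Map the raw pixel values of a label array (e.g. {0, 255} for a
    binary mask) onto contiguous class ids 0..C-1."""
    values = np.unique(target)              # the "print the target data" step
    print('raw label values:', values)      # invalid values show up here
    lut = {v: i for i, v in enumerate(values)}
    out = np.zeros(target.shape, dtype=np.int64)
    for v, i in lut.items():
        out[target == v] = i
    return out
```

For example, a binary mask saved with values 0 and 255 becomes a map of class ids 0 and 1, ready for cross-entropy loss.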

Maybe there is some other parameter in the first convolutional layer I need to change? Or something else?

I ran into exactly the same error as you, and I don't think it is a grayscale-image problem. My error was identical to yours; after the following two steps everything went back to normal:

  • Printed the target data and found the values were invalid
  • Re-made the label map so the labels run from 0 to C

How did you solve the problem "1only batches of spatial targets supported (non-empty 3D tensors) but got targets of size: : [4, 256, 256, 3]"? I have the same problem. Thank you. My WeChat is 18810220665.

casiahnu avatar Apr 23 '20 15:04 casiahnu