
RuntimeError: output with shape [1, 800, 600] doesn't match the broadcast shape [3, 800, 600]

Open Alex28132 opened this issue 1 year ago • 1 comments

/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:139: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  warnings.warn("Detected call of lr_scheduler.step() before optimizer.step(). "
/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:152: UserWarning: The epoch parameter in scheduler.step() was not necessary and is being deprecated where possible. Please use scheduler.step() to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case: https://github.com/pytorch/pytorch/issues/new/choose.
  warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)
/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torch/optim/lr_scheduler.py:814: UserWarning: To get the last learning rate computed by the scheduler, please use get_last_lr().
  warnings.warn("To get the last learning rate computed by the scheduler, "
Traceback (most recent call last):
  File "/home/itic/PycharmFile/Unet-Segmentation-Pytorch-Nest-of-Unets-master/pytorch_run.py", line 295, in
    s_label = data_transform(im_label)
  File "/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 95, in __call__
    img = t(img)
  File "/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torchvision/transforms/transforms.py", line 277, in forward
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 363, in normalize
    return F_t.normalize(tensor, mean=mean, std=std, inplace=inplace)
  File "/home/itic/anaconda3/envs/pytorch_gpu2.0_tensorflow_gpu2.8/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py", line 928, in normalize
    return tensor.sub_(mean).div_(std)
RuntimeError: output with shape [1, 800, 600] doesn't match the broadcast shape [3, 800, 600]

The image I input has 3 channels, and the label image has 1. I have seen that issue 51 is similar to my problem, but there was no way to solve it there; I feel that issue 51 has not been solved either. Do you have any solutions?
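For context, the error can be reproduced in isolation: torchvision's Normalize stores the mean/std as per-channel values, so applying a 3-channel Normalize to a 1-channel label tensor fails when the statistics are broadcast against the tensor. The snippet below is only a minimal sketch of that situation; the 800x600 size and the 0.5 mean/std values are placeholders, not the actual settings in pytorch_run.py.

```python
import torch
from torchvision import transforms

# A grayscale label converted with ToTensor() has shape [1, H, W].
label = torch.rand(1, 800, 600)

# Normalize configured with 3-channel statistics expects a [3, H, W] tensor.
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])

normalize(label)
# RuntimeError: output with shape [1, 800, 600] doesn't match the broadcast shape [3, 800, 600]
```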

Alex28132 avatar Dec 19 '23 05:12 Alex28132

It can be due to the data transformation using a single channel. Let me know if you are still using it and facing this issue.

bigmb avatar Mar 01 '24 22:03 bigmb

I also face this problem. I tried to change the definition of the class Image_dataset_folder, but I don't know how to fix it.

lifeisawar41 avatar Mar 26 '24 11:03 lifeisawar41

I don't know whether the data_transform at line 120 of pytorch_run.py applies to the class Image_dataset_folder.

lifeisawar41 avatar Mar 26 '24 11:03 lifeisawar41

Check the data_loader inputs for the transformation. The std and mean there require 3 channels. If you want to use the same transformation for the labels, you will need to convert it to 1 channel (i.e. use a single-channel mean and std).
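As a rough illustration of that suggestion (not the actual code from pytorch_run.py; the transform names and mean/std values below are placeholders), one option is to keep the 3-channel Normalize for the input images and give the labels their own transform with single-channel statistics:

```python
from torchvision import transforms

# Transform for the 3-channel input images (placeholder statistics).
image_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Separate transform for the 1-channel label masks: convert to grayscale
# and normalize with single-channel statistics (or drop Normalize entirely).
label_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5], std=[0.5]),
])
```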

bigmb avatar Mar 26 '24 11:03 bigmb

Oh! Thank you, the program seems to work now. I had tried this before, haha; maybe it failed then because I had commented out the code that converts the label data to grayscale.

lifeisawar41 avatar Mar 26 '24 12:03 lifeisawar41