MedicalZooPytorch
RuntimeError: Expected 5-dimensional input for 5-dimensional weight 8 4 3 3, but got 4-dimensional input of size [4, 256, 64, 64] instead
Can you tell me why this occurs and how to solve it?
Traceback (most recent call last):
File "train_brats2018_new.py", line 75, in
I have the same problem, did you solve it?
I have the same problem. I guess the brats2018 dataset cannot share the dataloader code with the other datasets...
Do we need to rewrite the Dataset class?
Hello everyone, and sorry for the late response. We are currently working on a big restructuring of the project. We are rewriting the dataset class as well, so hopefully this problem will be solved. Please have a little patience, as we do this in our free time.
I will look into this issue tomorrow ( or within the weekend) and let you know.
Hi. Any progress on this bug?
Can you tell me why this occurs and how to solve it?

Traceback (most recent call last):
  File "train_brats2018_new.py", line 75, in <module>
    main()
  File "train_brats2018_new.py", line 33, in main
    trainer.training()
  File "/data/Disk_A/guoyang/mqp/MedicalZooPytorch-master/lib/train/trainer.py", line 38, in training
    self.validate_epoch(epoch)
  File "/data/Disk_A/guoyang/mqp/MedicalZooPytorch-master/lib/train/trainer.py", line 83, in validate_epoch
    output = self.model(input_tensor)
  File "/data/Disk_A/guoyang/.conda/envs/pt041/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/Disk_A/guoyang/mqp/MedicalZooPytorch-master/lib/medzoo/Unet3D.py", line 114, in forward
    out = self.conv3d_c1_1(x)
  File "/data/Disk_A/guoyang/.conda/envs/pt041/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/Disk_A/guoyang/.conda/envs/pt041/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 478, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: Expected 5-dimensional input for 5-dimensional weight 8 4 3 3, but got 4-dimensional input of size [4, 256, 64, 64] instead
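For context on why the fixes below work (a sketch, not the project's code): `nn.Conv3d` weights are 5-D, `[out_channels, in_channels, kD, kH, kW]`, so the layer expects a 5-D input `[batch, channels, depth, height, width]`. If the dataset returns each volume without a channel axis, the batched input comes out 4-D (`[4, 256, 64, 64]`), which triggers this error. The `unsqueeze(0)` in the suggested fix adds the missing channel axis per sample. The shape arithmetic can be illustrated with NumPy, where `np.expand_dims` plays the role of `torch.Tensor.unsqueeze`:

```python
import numpy as np

# A single MRI modality volume as returned by the dataset:
# (depth, height, width) -- no channel axis yet.
vol = np.zeros((256, 64, 64), dtype=np.float32)

# Without a channel axis, batching 4 samples yields a 4-D array,
# matching the shape in the error message: (4, 256, 64, 64).
batch_no_channel = np.stack([vol] * 4)

# Adding the channel axis per sample (what unsqueeze(0) does in torch)
# makes the batched array 5-D, as nn.Conv3d expects: (N, C, D, H, W).
vol_c = np.expand_dims(vol, 0)        # (1, 256, 64, 64)
batch = np.stack([vol_c] * 4)         # (4, 1, 256, 64, 64)

print(batch_no_channel.shape, batch.shape)
```

The same reasoning explains why the fix belongs in `__getitem__`: the DataLoader stacks whatever each sample returns, so the channel axis has to be present before batching.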
You can change line 117 of file brats2019.py as follows:
return torch.FloatTensor(img_t1.copy()).unsqueeze(0), torch.FloatTensor(img_t1ce.copy()).unsqueeze(
    0), torch.FloatTensor(img_t2.copy()).unsqueeze(0), torch.FloatTensor(img_flair.copy()).unsqueeze(
    0), torch.FloatTensor(img_seg.copy())
I have faced the same problem. Has anyone solved it?
I was working with BRATS2018 and got this error. I fixed it in lib/medloaders/brats2018.py by removing the return instruction on line 117 and moving the return on line 113 outside the if condition. __getitem__ (line 105 onwards) now looks as follows:
def __getitem__(self, index):
    f_t1, f_t1ce, f_t2, f_flair, f_seg = self.list[index]
    img_t1, img_t1ce, img_t2, img_flair, img_seg = np.load(f_t1), np.load(f_t1ce), np.load(f_t2), np.load(
        f_flair), np.load(f_seg)
    if self.mode == 'train' and self.augmentation:
        [img_t1, img_t1ce, img_t2, img_flair], img_seg = self.transform([img_t1, img_t1ce, img_t2, img_flair],
                                                                        img_seg)
    return torch.FloatTensor(img_t1.copy()).unsqueeze(0), torch.FloatTensor(img_t1ce.copy()).unsqueeze(
        0), torch.FloatTensor(img_t2.copy()).unsqueeze(0), torch.FloatTensor(img_flair.copy()).unsqueeze(
        0), torch.FloatTensor(img_seg.copy())
similar to @YouTaoBaBa solution for brats2019
Yup, with a quick look this should do the trick. By the way, feel free to open a PR so we can fix this and similar issues. As you might have guessed, we don't have much time to actively develop this project, so we are looking for contributors to build upon our initial code. @YouTaoBaBa @GokulDas027