medical-images-sr

Axes not matching while training

Open Ayushkumawat opened this issue 2 years ago • 3 comments

[screenshot of the error]

I am getting this error while testing in the "real_esrgan_notebook.ipynb" file. This is the code:

[screenshot of the code]

'outscale' is set to '4' by default, but my dataset seems to have a different outscale value, which is causing this error. I don't know what my value should be or how to find it. Before this, there were also many issues caused by differences in the datasets. Could you please share the actual dataset that was used for training and testing?

Ayushkumawat avatar Feb 05 '23 12:02 Ayushkumawat

The official page to download the dataset (https://www.med.upenn.edu/sbia/brats2018.html) does not show up. Check whether the model has been generated in the directory and whether its name is correct.

[screenshot] Expand the section marked in yellow in the screenshot below, check where the error originates, and try to debug from there.

[screenshot] Check the size of the array.

[screenshot]
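A quick way to check the array size is to load one test file and print its shape. This is a minimal sketch: the zero array below is a synthetic stand-in, since the real .npy file names and dimensions are not shown in the thread.

```python
import numpy as np

# Synthetic stand-in for one test slice; in practice you would use
# img = np.load(...) with one of your own .npy test files.
img = np.zeros((240, 240), dtype=np.float32)

print(img.ndim, img.shape)
# RealESRGANer's pre_process runs np.transpose(img, (2, 0, 1)), which
# requires a 3D (H, W, C) array; if img.ndim is 2 here, that would
# explain a "ValueError: axes don't match array".
```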

ShawkhIbneRashid avatar Feb 05 '23 16:02 ShawkhIbneRashid

I have rechecked model_path and it is correct; it was generated by the training process [screenshot]. These are the testing files in .npy format [screenshot]. The sizes of all the testing files are [screenshot]. This is the code: [screenshot] [screenshot]

This is the expanded error:

```
ValueError                                Traceback (most recent call last)
<ipython-input> in <module>
     46     warnings.warn('The input image is small, try X4 model for better performance.')
     47
---> 48 output = upsampler.enhance(img, outscale=outscale)
     49
     50 generated_realesrgan.append(output)

4 frames
/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
     25     def decorate_context(*args, **kwargs):
     26         with self.clone():
---> 27             return func(*args, **kwargs)
     28     return cast(F, decorate_context)
     29

/content/drive/.shortcut-targets-by-id/1Rk3RugF_g7_wced0TMfXG-OFkPNmOxr4/Colab Notebooks/ESRGan/Real-ESRGAN/realesrgan/utils.py in enhance(self, img, outscale, alpha_upsampler)
    147
    148     # ------------------- process image (without the alpha channel) ------------------- #
--> 149     self.pre_process(img)
    150     if self.tile_size > 0:
    151         self.tile_process()

/content/drive/.shortcut-targets-by-id/1Rk3RugF_g7_wced0TMfXG-OFkPNmOxr4/Colab Notebooks/ESRGan/Real-ESRGAN/realesrgan/utils.py in pre_process(self, img)
     42
     43     def pre_process(self, img):
---> 44         img = torch.from_numpy(np.transpose(img, (2, 0, 1))).float()
     45         self.img = img.unsqueeze(0).to(self.device)
     46         if self.half:

<__array_function__ internals> in transpose(*args, **kwargs)

/usr/local/lib/python3.8/dist-packages/numpy/core/fromnumeric.py in transpose(a, axes)
    658
    659     """
--> 660     return _wrapfunc(a, 'transpose', axes)
    661
    662

/usr/local/lib/python3.8/dist-packages/numpy/core/fromnumeric.py in _wrapfunc(obj, method, *args, **kwds)
     55
     56     try:
---> 57         return bound(*args, **kwds)
     58     except TypeError:
     59         # A TypeError occurs if the object does have such a method in its

ValueError: axes don't match array
```
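The failing call is np.transpose(img, (2, 0, 1)) in pre_process, which needs a 3-axis (H, W, C) array, so the traceback suggests the loaded .npy slices are 2D grayscale rather than that the scale is wrong. A minimal sketch of a workaround, assuming your slices really are 2D (the random array below stands in for np.load on one of your files), is to stack the slice into three identical channels before calling enhance:

```python
import numpy as np

# Stand-in for one loaded test slice; in the notebook this would come
# from np.load on one of your .npy files. A 2D array reproduces the bug.
img = np.random.rand(240, 240).astype(np.float32)

# pre_process transposes axes (2, 0, 1), which requires three axes.
# Stacking the grayscale slice into three identical channels gives the
# (H, W, C) layout RealESRGANer expects:
if img.ndim == 2:
    img = np.stack([img, img, img], axis=-1)

chw = np.transpose(img, (2, 0, 1))  # now succeeds
print(img.shape, chw.shape)
```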

Ayushkumawat avatar Feb 05 '23 18:02 Ayushkumawat

Try using one of the downloaded pre-trained weights instead of the one generated from training to see if that solves the issue. Also check the .yml file to see what the scale is set to.
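For the .yml check: Real-ESRGAN training options files define the network's scale under network_g. The exact file name and values below are examples, not the poster's actual config, but the relevant key looks like this:

```yaml
# Excerpt of a Real-ESRGAN training options .yml (values are examples;
# check the options file actually used for your training run)
network_g:
  type: RRDBNet
  scale: 4   # the model's native upscaling factor -- compare this
             # with the outscale you pass to enhance()
```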

ShawkhIbneRashid avatar Feb 05 '23 18:02 ShawkhIbneRashid