brain-segmentation-pytorch
IndexError: too many indices for array
Hi, I am testing your code and ran into the following error. My test data has 11 patients with around 800 .tifs (400 images, 400 masks) per patient. I found that this error can occur either because too many index values are given or because the data is not 2D, but I double-checked and this is not the case.
from PIL import Image
import numpy
im = Image.open('case_00001_imaging_418.tif')
imarray = numpy.array(im)
imarray.shape
(100, 523)
imarray
array([[-1006.0074 , -1008.0335 , -1002.4894 , ..., -1017.4314 ,
-1023.156 , -1023.63153],
[ -958.95374, -1015.2491 , -1019.2383 , ..., -1013.8681 ,
-1013.6993 , -1022.19446],
[-1012.0173 , -1010.1507 , -1000.2253 , ..., -1003.897 ,
-1016.59955, -1019.79047],
...,
[-1008.00464, -1015.87286, -1019.1767 , ..., -1000.86926,
-1017.2503 , -1024.9915 ],
[ -990.96545, -999.8485 , -1001.993 , ..., -1014.1718 ,
-1014.19946, -1024.5924 ],
[ -994.9882 , -1003.7568 , -1014.58484, ..., -1024.8237 ,
-1012.564 , -1020.57794]], dtype=float32)
train_validate()
reading train images...
preprocessing train volumes...
cropping train volumes...
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-28-c37cc2aa5294> in <module>()
----> 1 train_validate()
<ipython-input-27-a6322369edd2> in train_validate()
2 device = torch.device("cpu" if not torch.cuda.is_available() else "cuda:0")
3
----> 4 loader_train, loader_valid = data_loaders(batch_size, workers, image_size, aug_scale, aug_angle)
5 loaders = {"train": loader_train, "valid": loader_valid}
6
<ipython-input-17-69b23f09d135> in data_loaders(batch_size, workers, image_size, aug_scale, aug_angle)
1 def data_loaders(batch_size, workers, image_size, aug_scale, aug_angle):
----> 2 dataset_train, dataset_valid = datasets("/labs/mpsnyder/gbogu17/kits_2019/kits19/data_tiff", image_size, aug_scale, aug_angle)
3
4 def worker_init(worker_id):
5 np.random.seed(42 + worker_id)
<ipython-input-18-ff0e5f728339> in datasets(images, image_size, aug_scale, aug_angle)
4 subset="train",
5 image_size=image_size,
----> 6 transform=transforms(scale=aug_scale, angle=aug_angle, flip_prob=0.5),
7 )
8 valid = BrainSegmentationDataset(
<ipython-input-7-f22e3a5b7539> in __init__(self, images_dir, transform, image_size, subset, random_sampling, seed)
56 print("cropping {} volumes...".format(subset))
57 # crop to smallest enclosing volume
---> 58 self.volumes = [crop_sample(v) for v in self.volumes]
59
60 print("padding {} volumes...".format(subset))
<ipython-input-7-f22e3a5b7539> in <listcomp>(.0)
56 print("cropping {} volumes...".format(subset))
57 # crop to smallest enclosing volume
---> 58 self.volumes = [crop_sample(v) for v in self.volumes]
59
60 print("padding {} volumes...".format(subset))
<ipython-input-3-e45584d890d1> in crop_sample(x)
16 return (
17 volume[z_min:z_max, y_min:y_max, x_min:x_max],
---> 18 mask[z_min:z_max, y_min:y_max, x_min:x_max],
19 )
IndexError: too many indices for array
Thanks for your interest in this repo. Image slices are expected to be 3D: two spatial dimensions and one dimension for the 3 channels. Mask slices are 2D. One image volume is expected to be 4D: an array of 3D image slices. The corresponding mask volume is expected to be 3D: an array of 2D mask slices. At the end of dataset initialization, mask slices are expanded with a channel dimension of size one, which makes mask volumes 4D as well.
# add channel dimension to masks
self.volumes = [(v, m[..., np.newaxis]) for (v, m) in self.volumes]
The volumes variable is expected to be a list of tuples, each containing a 4D image volume and a 4D mask volume.
The order of dimensions is: [slices, height, width, channels].
For example, an image volume with 10 slices and slices of size 200x300 has shape (10, 200, 300, 3) and corresponding mask volume has shape (10, 200, 300, 1).
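As a minimal sketch of those shapes (the arrays and sizes here are made up for illustration, not taken from the repository's loader):

import numpy as np

# hypothetical image volume: 10 slices of size 200x300 with 3 channels
image_volume = np.zeros((10, 200, 300, 3), dtype=np.float32)
# corresponding mask volume: 10 slices of size 200x300, no channel dimension yet
mask_volume = np.zeros((10, 200, 300), dtype=np.float32)

volumes = [(image_volume, mask_volume)]

# add channel dimension to masks, as done at the end of dataset initialization
volumes = [(v, m[..., np.newaxis]) for (v, m) in volumes]

print(volumes[0][0].shape)  # (10, 200, 300, 3)
print(volumes[0][1].shape)  # (10, 200, 300, 1)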
I hope it helps.
Thank you for the quick response and detailed explanation. I converted the .nii.gz files to .tif thinking that all the input to your code is 2D. Do you know how I can convert .nii.gz files into .tif files with 3D images and 2D masks?
The model is 2D with 3-channel input, but at a higher level it segments volumes.
If you have files for only one modality, you can copy the single channel to get 3-channel slices, or use the gray2rgb function from skimage: https://scikit-image.org/docs/dev/api/skimage.color.html#skimage.color.gray2rgb.
If you have files for three modalities, you have to register them first, then read them in the same order and concatenate them along the last dimension/axis.
For masks, you can read them with the imread function from skimage with as_gray=True: https://scikit-image.org/docs/dev/api/skimage.io.html#skimage.io.imread.
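Here is a minimal sketch of such a conversion for a single-modality case, using nibabel to read the .nii.gz files. The file names and the slice axis are assumptions, so adjust them to your data:

import nibabel as nib
import numpy as np
from skimage.color import gray2rgb
from skimage.io import imread, imsave

# load an image volume and its mask (file names are hypothetical)
image_volume = nib.load("case.nii.gz").get_fdata()          # e.g. (H, W, slices)
mask_volume = nib.load("segmentation.nii.gz").get_fdata()

# assuming slices lie along the last axis; change if your data differs
for i in range(image_volume.shape[-1]):
    # replicate the single channel to get a 3-channel image slice
    image_slice = gray2rgb(image_volume[..., i]).astype(np.float32)
    mask_slice = mask_volume[..., i].astype(np.uint8)

    imsave("image_{:03d}.tif".format(i), image_slice)
    imsave("mask_{:03d}.tif".format(i), mask_slice)

# masks can later be read back as 2D grayscale arrays
mask = imread("mask_000.tif", as_gray=True)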