
Are you required to pad images to dimension (240, 240, 160)?

Open · jayurbain opened this issue 3 years ago · 9 comments

All of my images are unit volume, in RAS orientation. (BTW: I've tried both RAS, per the documentation, and RAI, and see no difference.)

I'm getting reasonably good masks running `python ../BrainMaGe/brain_mage_single_run -i $input_brain_path -o 'output_mask.nii.gz' -m 'output_brain.nii.gz' -dev 0`.


However, when I run batch mode: `python ../BrainMaGe/brain_mage_run -params test_params_multi_4_2020.cfg -test True -mode Multi-4 -dev 0`

I receive the following error:

```
Weight file used : /home/BrainMaGe/weights/resunet_multi_4.pt
../BrainMaGe/brain_mage_run
Hostname :None
Start Time :Tue Sep 21 11:26:16 2021
Start Stamp:1632223576.6366262
Generating Test csv
100%|███████████████████████████████████████████| 61/61 [11:05<00:00, 10.91s/it]
Done with running the model.
You chose to save the brain. We are now saving it with the masks.
  0%|          | 0/61 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "../BrainMaGe/brain_mage_run", line 215, in <module>
    test_multi_4.infer_multi_4(params_file, DEVICE, args.save_brain, weights)
  File "/home/jupyter/BrainMaGe/BrainMaGe/tester/test_multi_4.py", line 157, in infer_multi_4
    image_data[mask_data == 0] = 0
IndexError: boolean index did not match indexed array along dimension 0; dimension is 220 but corresponding boolean dimension is 240
```
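For what it's worth, the failure reproduces with plain numpy whenever the image and mask shapes disagree; here is a minimal sketch with shapes matching the traceback (array contents are placeholders):

```python
import numpy as np

# Image left at its native size, mask interpolated to the model's fixed grid.
image_data = np.zeros((220, 240, 160))   # native-resolution image (dim 0 is 220)
mask_data = np.zeros((240, 240, 160))    # mask resampled to (240, 240, 160)

# Same operation as line 157 of test_multi_4.py; raises
# IndexError: boolean index did not match indexed array along dimension 0;
# dimension is 220 but corresponding boolean dimension is 240
image_data[mask_data == 0] = 0
```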

Here is line 157: `image_data[mask_data == 0] = 0` (https://github.com/CBICA/BrainMaGe/blob/master/BrainMaGe/tester/test_multi_4.py#:~:text=image_data%5Bmask_data%20%3D%3D%200%5D%20%3D%200)

It depends on line 126, where the output appears to be hard-coded to (240, 240, 160): `to_save = interpolate_image(output, (240, 240, 160))` (https://github.com/CBICA/BrainMaGe/blob/master/BrainMaGe/tester/test_multi_4.py#:~:text=to_save%20%3D%20interpolate_image(output%2C%20(240%2C%20240%2C%20160)))
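In case it's useful, here is a minimal sketch of what I mean by padding: symmetrically zero-padding (or center-cropping) a volume to (240, 240, 160) before running multi_4. nibabel/numpy are assumed, and the function name is mine, not BrainMaGe's:

```python
import nibabel as nib
import numpy as np

TARGET = (240, 240, 160)

def pad_or_crop_to(in_path, out_path, target=TARGET):
    """Symmetrically zero-pad (or center-crop) a NIfTI volume to `target`."""
    img = nib.load(in_path)
    data = np.asanyarray(img.dataobj)

    out = np.zeros(target, dtype=data.dtype)
    src, dst = [], []
    for dim_in, dim_out in zip(data.shape, target):
        if dim_in <= dim_out:                 # pad: center the input in the output
            off = (dim_out - dim_in) // 2
            src.append(slice(0, dim_in))
            dst.append(slice(off, off + dim_in))
        else:                                 # crop: keep the center of the input
            off = (dim_in - dim_out) // 2
            src.append(slice(off, off + dim_out))
            dst.append(slice(0, dim_out))
    out[tuple(dst)] = data[tuple(src)]

    # Keeping the original affine means world coordinates shift by the padding
    # offset, so this is only a rough workaround, not proper resampling.
    nib.save(nib.Nifti1Image(out, img.affine, img.header), out_path)
```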

Am I required to pad images to dimension (240, 240, 160) to use multi_4 mode?

Any guidance would be appreciated.

Thanks, Jay

jayurbain avatar Sep 21 '21 16:09 jayurbain

Have you passed the images through these steps https://github.com/CBICA/BrainMaGe/#steps-to-run-application? For the multi-modality segmentation, the images do need to pass through these steps.

However, we do suggest using `brain_mage_single_run` when possible.

Geeks-Sid avatar Sep 21 '21 18:09 Geeks-Sid

Yes, I followed the steps. Note: I did not use the Brain Imaging Toolkit. We have an existing pipeline where all of the images are registered.

I also ensured the orientation is RAI and normalized the images to unit volume. The dimensions of the preprocessed images will vary.
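For reference, this is roughly how I sanity-check each image with nibabel (path is hypothetical; note that nibabel reports "toward" axis codes, so LPS/RAI data shows up as `('L', 'P', 'S')`):

```python
import nibabel as nib

img = nib.load("subject_t1.nii.gz")   # hypothetical path

print(nib.aff2axcodes(img.affine))    # ('L', 'P', 'S') for LPS/RAI-oriented data
print(img.header.get_zooms())         # (1.0, 1.0, 1.0) for unit (isotropic) voxels
print(img.shape)                      # varies in my case; model expects (240, 240, 160)
```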

As I indicated earlier, `brain_mage_single_run` did work for me, but I have a lot of images to skull-strip.

The IndexError seems to indicate a sizing problem, so my basic question remains: does multi_4 mode require images of a specific dimension?

Thanks.

jayurbain avatar Sep 21 '21 19:09 jayurbain

> The dimensions of the preprocessed images will vary.

This is the issue that step 1 in https://github.com/CBICA/BrainMaGe/#steps-to-run-application tackles. The output of our preprocessing pipeline ensures that the images are in the space the model expects.
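A quick way to check whether any preprocessed file deviates from the expected grid (a sketch only; the directory layout is hypothetical):

```python
import glob
import nibabel as nib

EXPECTED = (240, 240, 160)

for path in sorted(glob.glob("preprocessed/*.nii.gz")):   # hypothetical layout
    shape = nib.load(path).shape
    if shape != EXPECTED:
        print(f"{path}: {shape} != {EXPECTED}")
```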

Cheers.

sarthakpati avatar Sep 21 '21 19:09 sarthakpati

"1. Co-registration within patient to the SRI-24 atlas in the LPS/RAI space."

Yes, I have done this.

Are there any additional specifics like the dimension of the image?

jayurbain avatar Sep 22 '21 14:09 jayurbain

Since you are seeing different image dimensions/shapes, could you please try again using CaPTk?

sarthakpati avatar Sep 22 '21 14:09 sarthakpati

@jayurbain, the dimensions of the image should also match (240, 240, 160), as mentioned in the paper. `brain_mage_single_run` should also work for all the cases individually.

Geeks-Sid avatar Sep 23 '21 10:09 Geeks-Sid

Hey @jayurbain,

The padding should happen automatically in our preprocessing function, but we are debugging this and will get back to you.

Cheers, Sarthak

sarthakpati avatar Sep 23 '21 12:09 sarthakpati

Thanks. I did find a problem with how our images are standardized and corrected it. For now, the single-run, modality-agnostic approach is working well, and I haven't found it to be that slow: ~10 s per volume with a GPU, and I use the same mask on all 4 registered series. When I get a chance, I will retest Multi-4.
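In case it helps others, applying the one generated mask to all four co-registered series is straightforward (a sketch; file names are hypothetical):

```python
import nibabel as nib
import numpy as np

# One mask fits all four series because they are already co-registered.
mask = np.asanyarray(nib.load("output_mask.nii.gz").dataobj) > 0

for modality in ("t1", "t1ce", "t2", "flair"):             # hypothetical names
    img = nib.load(f"{modality}.nii.gz")
    data = np.asanyarray(img.dataobj).copy()
    data[~mask] = 0                                        # zero out non-brain voxels
    nib.save(nib.Nifti1Image(data, img.affine, img.header),
             f"{modality}_brain.nii.gz")
```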

jayurbain avatar Sep 26 '21 15:09 jayurbain

Hey Jay,

To be honest, we use the modality-agnostic model for all our pipelines, so you should be fine! In any case, @Geeks-Sid is debugging and he should reach out here once he is ready.

Cheers, Sarthak

sarthakpati avatar Sep 26 '21 17:09 sarthakpati