
Absent segmentation after prediction

Open araikes opened this issue 1 year ago • 14 comments

Hello,

I was hoping to use nobrainer for brain extraction on ex-vivo mouse brain MRIs and have been following the Google Colab brain extraction notebook. After training nobrainer on my data and predicting on one of the training images, I get an empty image (all 0s). I've also run the Google Colab notebook as-is and obtain what appears to be the same result (see below, especially when letting nilearn define the cut points). Is there a way for me to debug what's happening and why my anticipated brain masks are empty?

Thanks

[screenshots: predicted brain masks rendered with nilearn, appearing empty]
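(Editorial aside, not part of the original thread.) Before trusting a viewer's cut points, it is worth confirming numerically whether the saved mask is truly all zeros. A minimal sketch; loading with nibabel is shown only in a comment, and the helper name is our own:

```python
import numpy as np

def summarize_mask(data):
    """Return (min, max, nonzero-voxel count) for a mask array."""
    data = np.asarray(data)
    return float(data.min()), float(data.max()), int(np.count_nonzero(data))

# With nibabel, the saved prediction could be inspected as:
#   data = np.asarray(nib.load("pred.nii.gz").dataobj)
# An all-zero volume, like the one reported here, summarizes to:
print(summarize_mask(np.zeros((4, 4, 4))))  # (0.0, 0.0, 0)
```

If the max is nonzero but small, the problem is likely in visualization or thresholding rather than in the model itself.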

araikes avatar Feb 08 '24 23:02 araikes

I presume you trained a model on mouse brains, correct? Let's take a step back and check whether the basic U-Net model (or MeshNet) described in the guide performs well on mouse brains at all. What was the performance like during training? Assuming all of that was taken care of, did you inspect the image interactively in Freeview or MRIcron instead?

hvgazula avatar Feb 08 '24 23:02 hvgazula

One other thing: if you followed the training settings in the guide, note that they are only for demonstration purposes, to allow running the tutorial quickly. You should adjust the defaults for your use case and the amount of data you have.

@hvgazula - we should really retrain and update the brain extractor on our side and release it in the zoo so that people can do other types of transfer learning.

satra avatar Feb 09 '24 02:02 satra

So a few answers for both @hvgazula and @satra:

  1. I did train using mouse data with 41 brains and brain masks. I know it's a small dataset, but this was more of a sanity check as to whether I could get output at all before investing a lot of time.
  2. My image dimensions are 256x256x256, so they work with the example settings.
  3. It seemed like training worked, based on the output:
Total params: 4772961 (18.21 MB)
Trainable params: 4770625 (18.20 MB)
Non-trainable params: 2336 (9.12 KB)
__________________________________________________________________________________________________
288/288 [==============================] - 5340s 19s/step - loss: 0.1980 - dice: 0.8020 - val_loss: 0.1812 - val_dice: 0.8188
  4. I used the same "predict" call as in the example and saved the output using nib.save. Opening it in ITK-SNAP shows a zero-filled image.
  5. The example Google Colab notebook ("restart and run all") also produced what appeared to be an empty mask, so I don't know if something just isn't working.
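(Editorial aside.) One possibility worth ruling out, given the reasonable validation Dice above: if the model's output is per-voxel probabilities rather than a hard mask, saving it directly and viewing with aggressive cut points can look empty. A hedged sketch of explicit thresholding, assuming a sigmoid-style output in [0, 1]:

```python
import numpy as np

def binarize(prob, threshold=0.5):
    """Convert per-voxel probabilities to a binary uint8 mask."""
    return (np.asarray(prob) > threshold).astype(np.uint8)

probs = np.array([0.1, 0.6, 0.9, 0.4])
print(binarize(probs))  # [0 1 1 0]
```

If even a low threshold (e.g. 0.1) yields an empty mask, the network genuinely predicts background everywhere and the issue is in training, not post-processing.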

araikes avatar Feb 09 '24 04:02 araikes

Any thoughts on this?

araikes avatar Feb 14 '24 16:02 araikes

@araikes thanks for checking. I will get to this later today or tomorrow. Working on fixing other related issues.

hvgazula avatar Feb 14 '24 16:02 hvgazula

@araikes Can you please email me at hvgazula AT umich DOT edu to set up a call to discuss this so I can take it further? Thanks.

hvgazula avatar Feb 14 '24 17:02 hvgazula

@hvgazula done. You should have it shortly.

araikes avatar Feb 14 '24 18:02 araikes

@hvgazula, I finally got a GPU node and upped training to 10 epochs (as a first step) to see if that would work. It still produces an empty image.

araikes avatar Feb 15 '24 21:02 araikes

Try 50, please. The cluster on my end is down, so I am stuck a bit on this. :/

hvgazula avatar Feb 15 '24 21:02 hvgazula

My kernel dies when I try 50.

araikes avatar Feb 15 '24 22:02 araikes

Could you tell what the error is?

hvgazula avatar Feb 15 '24 22:02 hvgazula

No... it just says that it crashed.

araikes avatar Feb 15 '24 22:02 araikes

Forgot the --nv flag.... trying again
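(Editorial note for readers hitting the same wall: Singularity/Apptainer needs the `--nv` flag to expose the host's NVIDIA driver inside the container; without it, TensorFlow silently falls back to CPU or fails to initialize CUDA. The container and script names below are placeholders.)

```shell
# --nv binds the host's NVIDIA driver and CUDA libraries into the container
singularity exec --nv nobrainer.sif python train.py
```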

araikes avatar Feb 15 '24 22:02 araikes

My Python terminal was killed without an error message, and now I get a CUDA_OUT_OF_MEMORY error despite nothing apparently running on the GPU.
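(Editorial aside.) CUDA_OUT_OF_MEMORY after a killed job is often either a stale process still holding the device (check `nvidia-smi` and kill leftover PIDs) or TensorFlow's default behavior of reserving nearly all GPU memory at startup. One mitigation, using TensorFlow's documented memory-growth option, placed before any model is built:

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory incrementally instead of
# reserving (almost) the whole device as soon as the first op runs.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```

This is a configuration sketch, not a fix for the empty-mask issue itself; it only makes repeated runs on a shared GPU less likely to fail at startup.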

araikes avatar Feb 15 '24 23:02 araikes