
Convert three 2D MRI slices into an RGB image for transfer learning

Open • anbai106 opened this issue 5 years ago • 7 comments

Hi,

Thanks very much for your work.

I have a question about how you convert slices into an RGB image.

To construct an RGB color image, we concatenate the key position slice, the slice one index before the key position, and the slice one index after the key position into the R, G, and B channels respectively (Figure 3b).

I understand that pretrained models like AlexNet were trained on ImageNet RGB images, so it is reasonable to convert the MRI slices into an RGB image to facilitate transfer learning. One question: did you normalize the image intensity to 0-255? The intensity of an MRI image spans a much broader range than an RGB image. If you simply put the 3 slices into the R, G, and B channels, the resulting image no longer looks like an MRI brain image (for some slices it works, like the ones you show in your repo, but for other slices the result is no longer recognizable as a brain)… So I wonder how you obtained the synthesized 2D RGB images. Did you always get good images?
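
To make the question concrete, here is a rough sketch of the kind of conversion I mean (the synthetic volume, slice index, and 0-255 rescaling are just my assumptions, not necessarily what your code does):

```python
import numpy as np

# Stand-in for a 3D MRI volume loaded with e.g. nibabel;
# raw intensities are typically far outside 0-255.
vol = np.random.rand(193, 229, 193) * 4000.0

k = vol.shape[2] // 2                       # a hypothetical "key position" slice index
slices = [vol[:, :, k - 1], vol[:, :, k], vol[:, :, k + 1]]

# Naive version: write raw intensities straight into uint8 channels.
# Out-of-range values get clipped/wrapped, which can produce "non-brain" images.
naive_rgb = np.stack(slices, axis=-1).astype(np.uint8)

# Rescaled version: map each slice to 0-255 before stacking into R, G, B.
def to_uint8(s):
    s = s - s.min()
    return (255.0 * s / (s.max() + 1e-8)).astype(np.uint8)

rgb = np.stack([to_uint8(s) for s in slices], axis=-1)   # H x W x 3
```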

Thanks very much in advance

Hao

anbai106 avatar Nov 10 '18 10:11 anbai106

Hi, thanks for your interest in our project. As I recall, we normalize our images to the range [0, 1] to perform tensor computation in PyTorch. We checked several images and think they are good to work with (at least the subset we checked).
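
Roughly, the [0, 1] normalization looks like this (an illustrative sketch with stand-in data, not our exact code):

```python
import torch

def normalize_01(x: torch.Tensor) -> torch.Tensor:
    """Min-max normalize a tensor to [0, 1]."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-8)

# Stand-in for three adjacent MRI slices with a wide raw intensity range.
slices = [torch.rand(224, 224) * 4000.0 for _ in range(3)]

# Normalize each slice to [0, 1] and stack channel-first (3 x H x W), as PyTorch expects.
img = torch.stack([normalize_01(s) for s in slices], dim=0)
```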

wangyirui avatar Nov 11 '18 19:11 wangyirui

@wangyirui I visualized some example images in TensorBoard and found that some converted images do not look like an MRI brain image at all… They are not always as clean as what you show in Figure 3b. It is a pity that I cannot post these images here on GitHub… Would you mind giving me your email so that I can send you some of the bad images?
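
For reference, this is roughly how I log the converted images for inspection (the log directory, tags, and stand-in batch are just placeholders):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/slice_check")      # placeholder log directory

# Stand-in batch of converted pseudo-RGB images (N x 3 x H x W), values in [0, 1].
batch = torch.rand(8, 3, 224, 224)

for i, img in enumerate(batch):
    writer.add_image(f"pseudo_rgb/{i}", img, dataformats="CHW")
writer.close()
```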

Best

anbai106 avatar Nov 11 '18 20:11 anbai106

So your question is: after constructing an "RGB" image, you want to visualize it, right? What do you mean by "do not make sense"? The color or the alignment?

To form an RGB image, we could simply copy one slice three times. But we want to integrate more information from adjacent slices, which is why we use 3 different slices to construct the RGB image. It may look strange for some images, I think. But if you treat the images simply as signals, it can still work even if the visualization is not very reasonable.
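
In rough code, the two options look like this (a sketch with a stand-in volume, not the exact repository code):

```python
import numpy as np

# Stand-in for a 3D MRI volume and a key slice index.
vol = np.random.rand(193, 229, 193)
k = vol.shape[2] // 2

# Option 1: copy the key slice into all three channels (grayscale shown as RGB).
rgb_copied = np.stack([vol[:, :, k]] * 3, axis=-1)

# Option 2 (what we do): put the key slice and its two neighbours into R, G, B
# so that adjacent-slice information is included.
rgb_adjacent = np.stack([vol[:, :, k - 1], vol[:, :, k], vol[:, :, k + 1]], axis=-1)
```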

You can send the images to [email protected], if sharing them is permitted.

wangyirui avatar Nov 11 '18 20:11 wangyirui

@wangyirui I will send you the images tomorrow morning, European time :)

I understand your point: if we only care about the accuracy of the classification task, it is fine even if the image has been "distorted" to the point that it is no longer a brain image. But for doctors (like neurologists), it would be very strange that what we feed into the CNN is no longer a "brain". Another inconvenience is that we cannot even get an activation map that is interpretable as a brain, to understand what the network has really learnt at each layer.
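
For example, if the input still looked like a brain, one could inspect intermediate activations with a simple forward hook, roughly like this (the layer index and names are only an illustration):

```python
import torch
import torchvision.models as models

model = models.alexnet()     # random weights here; a pretrained model would be used in practice
model.eval()

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Hook an intermediate conv layer; the index (conv4 in AlexNet's feature extractor) is just an example.
model.features[8].register_forward_hook(save_activation("conv4"))

# Stand-in pseudo-RGB input built from three slices: 1 x 3 x 224 x 224.
x = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    model(x)

fmap = activations["conv4"]          # 1 x C x H' x W'
heatmap = fmap.mean(dim=1)           # average over channels for a crude activation map
print(heatmap.shape)
```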

Best

anbai106 avatar Nov 11 '18 20:11 anbai106

Hello, I would like to ask: are the slices converted into three channels and sent to a network for training, and is the result then averaged? Thanks.

zfy514 avatar Jul 10 '21 01:07 zfy514

Hello, I would like to ask: are the slices converted into three channels and sent to a network for training, and is the result then averaged? Thanks.

Thanks for your question. The three slices are combined into a single pseudo-RGB image as the input, and only ONE probability is predicted.
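
Roughly, the input/output shape looks like this (an illustrative sketch with an assumed 2-class head, not the exact repository code):

```python
import torch
import torch.nn as nn
import torchvision.models as models

model = models.alexnet()                        # random weights here; pretrained weights in practice
model.classifier[6] = nn.Linear(4096, 2)        # assumed 2-class head, e.g. AD vs. normal control
model.eval()

# ONE pseudo-RGB image built from three adjacent slices: batch of 1, 3 channels.
x = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    logits = model(x)                           # shape (1, 2)
    prob = torch.softmax(logits, dim=1)[0, 1]   # a single predicted probability for the positive class
print(prob.item())
```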

wangyirui avatar Jul 10 '21 12:07 wangyirui

The three slices are combined into a single pseudo-RGB image as the input, and only ONE probability is predicted.

My understanding is that the three slices are combined into one RGB image, which is then input into the network. Is that correct? Thank you very much.

zfy514 avatar Jul 12 '21 02:07 zfy514