
Transfer learning on FDG-PET scans

MatjazBostic opened this issue 2 years ago · 5 comments

My task is to determine, from an FDG-PET scan, whether a patient has a certain brain disease. The output should be a single neuron that indicates whether the patient has the disease or not (2 classes). I have two questions about that:

  • Does it make sense to use any of your pre-trained models on an FDG-PET scan (since they were not originally trained on FDG-PET scans)? If so, which model do you think would be the most appropriate?

  • As far as I understand, the labels for your models are other brain images with tumors or other brain structures marked. Is it possible to use such a model for classification with two classes?

I have tried the following:

  • Loading the model as shown below and then changing the input shape to the shape of my inputs. I couldn't manage to change the input shape. My plan was to add two dense layers after the model, with the last layer having a single output neuron (see the sketch after this list for what I have in mind).
from tensorflow.keras.models import load_model

model_path = "../trained-models/neuronets/brainy/0.1.0/brain-extraction-unet-128iso-model.h5"
model = load_model(model_path, compile=False)
  • Using the model below. I got an error when I tried to change the number of classes to 2. I also tried keeping 50 classes and adding a sequential model with a one-neuron dense layer after it, but in that case I ran out of memory.
import tensorflow as tf
# exact import path may differ across nobrainer versions
from nobrainer.models.bayesian_meshnet import variational_meshnet

model = variational_meshnet(
    n_classes=50,
    input_shape=(91, 109, 91, 1),
    filters=96,
    dropout="concrete",
    receptive_field=37,
    is_monte_carlo=True)

weights_path = tf.keras.utils.get_file(
    fname="nobrainer_spikeslab_32iso_weights.h5",
    origin="https://dl.dropbox.com/s/rojjoio9jyyfejy/nobrainer_spikeslab_32iso_weights.h5")

model.load_weights(weights_path)
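
A minimal, untested sketch of the plan from the first bullet, assuming the PET volumes are first resampled/padded to the (128, 128, 128) shape the brainy model expects; the hidden dense layer size is a placeholder:

import tensorflow as tf
from tensorflow.keras.models import load_model

# reuse the pre-trained brain-extraction U-Net as a frozen base and stack a
# small classification head on top, ending in one sigmoid neuron
model_path = "../trained-models/neuronets/brainy/0.1.0/brain-extraction-unet-128iso-model.h5"
base = load_model(model_path, compile=False)
base.trainable = False  # freeze pre-trained weights for the first training phase

x = tf.keras.layers.Flatten()(base.output)           # note: flattening 128**3 voxels makes this head large
x = tf.keras.layers.Dense(16, activation="relu")(x)  # placeholder width
out = tf.keras.layers.Dense(1, activation="sigmoid")(x)

classifier = tf.keras.Model(inputs=base.input, outputs=out)
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=[tf.keras.metrics.AUC()])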

MatjazBostic · Mar 02 '22

You probably need to fine-tune a model on your input data. To help you better, a few questions:

  • What is your input data shape? Does it fit into GPU memory, or do you need to divide it into sub-blocks and then feed those to the model?
  • Is there a brain feature that helps the classification (for example, lesions or atrophy)?
  • The model you used is variational, which occupies more GPU memory than the non-variational models but gives an uncertainty value that can be useful in some problems (see the sketch below). So, do you need a variational model?
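
To illustrate what the uncertainty value means (a toy example, not the nobrainer API): with a Monte Carlo model every forward pass is stochastic, so repeating the prediction gives a distribution whose mean acts as the prediction and whose spread as the uncertainty.

import numpy as np
import tensorflow as tf

# tiny stand-in model with dropout, just to show the Monte Carlo idea
inputs = tf.keras.Input(shape=(32, 32, 32, 1))
h = tf.keras.layers.Conv3D(8, 3, activation="relu")(inputs)
h = tf.keras.layers.Dropout(0.3)(h)
h = tf.keras.layers.GlobalAveragePooling3D()(h)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(h)
mc_model = tf.keras.Model(inputs, outputs)

volume = np.random.rand(1, 32, 32, 32, 1).astype("float32")

# training=True keeps dropout active at inference, so each pass differs
preds = np.stack([mc_model(volume, training=True).numpy() for _ in range(20)])
print("prediction:", preds.mean(), "uncertainty (std):", preds.std())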

Hoda1394 · Mar 03 '22

@Hoda1394 Thanks for your comment.

  • My input data shape is (91, 109, 91) (no color channels). I don't know whether it fits. When I previously tried a normal (non-pretrained) CNN with Keras, I didn't have to divide it manually. I got about 0.95 AUC, but I would like to improve on that.
  • I think there is. I am trying to determine whether the patient has Alzheimer's dementia or not (I need a confidence value between 0 and 1 as the answer, not just true/false).
  • I don't really understand the difference between a variational and a non-variational model. But if I understand correctly, is this "uncertainty" the same thing I said I need in the previous point (confidence)?

BTW, I am trying this on an Nvidia GTX 1070 Ti.

MatjazBostic · Mar 03 '22

  • Keras pre-trained models have a fixed structure, which means they only accept inputs with the shapes they were trained on. So you need to reshape (or resample) your inputs to be able to feed them to the model.
  • Bayesian models can give you an uncertainty output, but they are harder to train; fine-tuning gives a good starting point for that training.
  • Yes, variational or Bayesian models give you the uncertainty value, but interpreting the uncertainty as the model's confidence is a little controversial, and it depends on the type of uncertainty.
  • If you are running out of memory, I suggest dividing your input into blocks and then feeding those to the network. But in that case, you need an aggregation method that gives one output per whole-brain input at the end for classification (see the sketch after this list).
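
A rough, untested sketch of what I mean by blocks plus aggregation; the block size, the stand-in block classifier, and the mean aggregation are all placeholders, not recommendations:

import numpy as np
import tensorflow as tf

def to_blocks(volume, block=32):
    # pad so each axis divides evenly into `block`, then split into cubes
    pad = [(0, (block - s % block) % block) for s in volume.shape]
    padded = np.pad(volume, pad)
    nz, ny, nx = (s // block for s in padded.shape)
    return (padded.reshape(nz, block, ny, block, nx, block)
                  .transpose(0, 2, 4, 1, 3, 5)
                  .reshape(-1, block, block, block, 1))  # add channel axis

# stand-in block classifier, only to show the shapes involved
block_model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 32, 1)),
    tf.keras.layers.Conv3D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling3D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

scan = np.random.rand(91, 109, 91).astype("float32")  # stand-in for one volume
blocks = to_blocks(scan)                               # (36, 32, 32, 32, 1) here
block_probs = block_model.predict(blocks)              # one probability per block
subject_prob = block_probs.mean()                      # naive aggregation: average over blocks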

Hoda1394 · Mar 08 '22

One more thing: in your last example, I see you are getting the weights from Dropbox rather than from the trained_models repository. Is this model referenced somewhere?

Hoda1394 · Mar 08 '22

@Hoda1394 I looked a bit through the trained_models repository, but I am unsure whether any of the models there would be appropriate for my use case. Do you think any of them would be? The closest one is probably kwyk, but I am a bit unsure how to implement it, since it has an image as output. Should I add a convolutional network afterwards to convert that image to a single output neuron?
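
To make the question concrete, something like this is what I'm imagining (untested; "model" is the variational meshnet from my earlier snippet, and which layer to cut at is only a guess):

import tensorflow as tf

# cut the segmentation model before its per-voxel output, then pool the
# remaining feature map down to a single disease probability
backbone = tf.keras.Model(inputs=model.input, outputs=model.layers[-2].output)
pooled = tf.keras.layers.GlobalAveragePooling3D()(backbone.output)
disease_prob = tf.keras.layers.Dense(1, activation="sigmoid")(pooled)
classifier = tf.keras.Model(inputs=backbone.input, outputs=disease_prob)
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=[tf.keras.metrics.AUC()])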

I found the Dropbox link here: https://github.com/neuronets/nobrainer/blob/master/guide/transfer_learning-bayesian.ipynb

MatjazBostic · Mar 10 '22