
Transfer learning from braingen model

Open shmann opened this issue 2 years ago • 8 comments

Is it possible to use the braingen model for transfer learning for a segmentation problem, similar to how brainy was used for transfer learning in the AMS paper?

shmann avatar Apr 06 '22 21:04 shmann

i would suggest using brainy. braingen is a GAN, and it would require a very different kind of network architecture and paradigm to be used for segmentation. it would also depend on the quality of underlying data you want to segment.

satra avatar Apr 10 '22 13:04 satra

I was thinking the encoder part could be used to encode MRI images into representations for downstream tasks

shmann avatar Apr 13 '22 01:04 shmann

so the downstream task in this case would be segmentation? for encoding mri images, apart from braingen you can also try out the vae model or the siamese representation learning (simsiam) model

dhritimandas avatar Apr 13 '22 02:04 dhritimandas

@dhritimandas --

Right. Specifically, I'm playing with a similar type of segmentation as in the AMS paper--in other words, segmentation that is spatially inconsistent (as opposed to a particular brain region, or something that can be defined with a template). Also, I'm interested in models that were pre-trained with T1 brain MRI data, because my dataset is quite small (that's what led me to the brainy model).

Regarding your suggestions, are they in this repository? I've been poking around, but I don't recall seeing those names come up.

shmann avatar Apr 13 '22 04:04 shmann

@shmann - the brainy model is here: https://github.com/neuronets/trained-models/releases/tag/0.1. it is available in keras hdf5 format.

@satra @dhritimandas - a good encoder would be useful, and we could use self-supervision to train it. i saw a self-supervised learning repo pop up in the neuronets org, so i assume you are working towards that.

i'm also working on self-supervised learning in histology images, and two training methods i am trying are

  • masked auto encoder https://arxiv.org/abs/2111.06377
  • dino https://github.com/facebookresearch/dino
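For intuition, the core step of masked-autoencoder pretraining is just hiding a large fraction of non-overlapping image patches and training the model to reconstruct them from the visible remainder. Here is a minimal numpy sketch of that masking step (illustrative only; the function name and parameters are made up for this example, and the actual ViT-MAE implementation operates on patch embeddings rather than raw pixels):

```python
import numpy as np

def mask_patches(image, patch=4, mask_ratio=0.75, seed=0):
    """Split a square 2D image into non-overlapping patches and zero out
    a random subset of them, as in masked-autoencoder pretraining."""
    h, w = image.shape
    ph, pw = h // patch, w // patch
    n = ph * pw
    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n, size=int(n * mask_ratio), replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[masked_idx] = True
    out = image.copy()
    for i in np.flatnonzero(mask):
        r, c = divmod(i, pw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return out, mask

img = np.ones((16, 16))
masked, mask = mask_patches(img)
# a reconstruction loss would then compare the model's output on
# `masked` against the original `img`, only at the masked patches
```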

kaczmarj avatar Apr 13 '22 12:04 kaczmarj

@shmann the models can be found here - https://github.com/neuronets/nobrainer/tree/master/nobrainer/models - this includes autoencoders / progressive AE; brainsiam (the siamese network); dcgan; and highresnet, among others.

pretrained models are here - https://github.com/neuronets/trained-models

please let us know if you have any further questions regarding these.

@kaczmarj: keras has an implementation of masked autoencoders (https://keras.io/examples/vision/masked_image_modeling/) though I do not think it works that well even on natural images. how is it turning out for you?

dino is the next step i am exploring after simsiam. happy to discuss this with you.

dhritimandas avatar Apr 13 '22 21:04 dhritimandas

@dhritimandas - i am using huggingface for the masked autoencoder. https://huggingface.co/docs/transformers/model_doc/vit_mae

kaczmarj avatar Apr 13 '22 21:04 kaczmarj

@kaczmarj I tried fine-tuning the brainy model (up to 200 epochs), but it's still putting out a decent amount of noise. I might try incorporating the 'largest-label' concept (as offered in the nobrainer cli). However, I am wondering: was such an approach used for AMS? Or was the AMS model able to learn to predict a single label without culling noise in that way?
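For reference, the 'largest-label' idea is to keep only the largest connected component of the predicted binary mask and discard the smaller scattered blobs. A minimal 2D numpy sketch of that post-processing step is below (the function name is made up for illustration, and nobrainer's actual implementation may differ, e.g. it operates in 3D):

```python
import numpy as np
from collections import deque

def largest_component(mask):
    """Keep only the largest 4-connected component of a 2D binary
    mask, discarding smaller 'noise' blobs."""
    mask = mask.astype(bool)
    labels = np.zeros(mask.shape, dtype=int)
    sizes = {}
    current = 0
    # flood-fill each unlabeled foreground voxel to label its component
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        labels[start] = current
        q = deque([start])
        size = 0
        while q:
            r, c = q.popleft()
            size += 1
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    q.append((nr, nc))
        sizes[current] = size
    if not sizes:
        return mask
    keep = max(sizes, key=sizes.get)
    return labels == keep

noisy = np.zeros((6, 6), dtype=bool)
noisy[1:4, 1:4] = True   # large blob (9 voxels)
noisy[5, 5] = True       # isolated noise voxel
clean = largest_component(noisy)
```

In practice one would use an optimized connected-components routine (e.g. `scipy.ndimage.label`) rather than a hand-rolled BFS, but the effect on a noisy prediction is the same.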

shmann avatar Apr 14 '22 02:04 shmann