3d-mri-brain-tumor-segmentation-using-autoencoder-regularization
Ran for 200 epochs with batch size 32 on BraTS18
I noticed that I couldn't get the Colab notebook to work due to a TensorFlow error, so I ran it locally instead and was able to modify the notebook to run as a Python script.
I initially ran it with the default settings, and this was my output: https://gist.github.com/CraigMyles/12800936b55830d92aaf6a4b7bbb913e
I then ran it for 200 epochs with batch size 32 and got the following results: https://gist.github.com/CraigMyles/f69392cba910accacbd45fc378a4474f
Epoch 200/200 4/4 [==============================] - 51s 13s/step - loss: 0.0225 - Dec_GT_Output_loss: 0.0000e+00 - Dec_VAE_Output_loss: 0.0225 - Dec_GT_Output_dice_coefficient: 0.0000e+00 - Dec_VAE_Output_dice_coefficient: 0.8982
I have a few questions about the model and how it works. With other segmentation models, once training is finished you end up with a weighted model that can be run against a test set, but that doesn't seem to be the case here?
I was also wondering how, if possible, I could get the results as JPG or PNG images that I could then turn into a GIF.
Any advice or explanation would be great.
Hello @CraigMyles, this script was written before TensorFlow 2.0 was released. I am not sure how things work now, but since the model itself is designed in Keras, you should be able to do:
model.save('model.h5')
to save the model. To later load the model:
from keras.models import load_model
model = load_model('model.h5')
To get predictions on a test image, you can use model.predict(img), where img is a NumPy array. You can save the output with cv2.imwrite.
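For the PNG/GIF question, a minimal sketch of that workflow (save the model, run predict, dump axial slices as PNGs) could look like the code below. Note the assumptions: model is the trained network, img is a preprocessed channels-first volume of shape (1, 4, 160, 192, 128), and the first element of predict's output is taken as the segmentation head; none of these details are taken verbatim from the script.

import numpy as np
import cv2

model.save('model.h5')  # serialize architecture + weights as described above

pred = model.predict(img)
seg = pred[0] if isinstance(pred, list) else pred  # assume the first output is the segmentation head
seg = seg[0]  # drop the batch axis -> (channels, D, H, W) in channels-first layout

# Write one PNG per axial slice of the first channel; the frames can later be
# stitched into a GIF, e.g. with imageio.mimsave or ImageMagick.
for z in range(seg.shape[1]):
    frame = (seg[0, z] * 255).astype(np.uint8)  # scale sigmoid probabilities to 0-255
    cv2.imwrite(f'slice_{z:03d}.png', frame)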
FYI:
Hello @CraigMyles, this script was written before TensorFlow 2.0 was released. I am not sure how things work now, but since the model itself is designed in Keras, you should be able to do:
model.save('model.h5')
to save the model. To later load the model:
from keras.models import load_model
model = load_model('model.h5')
This will not work due to the custom loss functions. I had to save the weights and rebuild the model instead. Let me know if there is a better way.
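For reference, a rough sketch of that save-weights-and-rebuild workaround; build_model, its import path, and its arguments below are assumptions that must match the repo's model.py and whatever configuration was used for training.

from model import build_model  # the repo's model factory (assumed to be importable like this)

# After training: save only the weights, which sidesteps serializing the custom losses.
model.save_weights('model_weights.h5')

# Later: rebuild the exact same architecture and restore the weights into it.
# input_shape/output_channels here are placeholders and must match the training run.
model2 = build_model(input_shape=(4, 160, 192, 128), output_channels=3)
model2.load_weights('model_weights.h5')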
This works actually. At least it used to in TensorFlow 1.x.
I forgot to account for the custom loss functions. My bad. Anyway, this is how you do it: save the model as described above, then load it with load_model, passing a custom_objects argument in which you map the names of all the custom losses and layers to the corresponding objects.
You can read more about it here (check the 4th point, aptly titled: Handling custom layers (or other custom objects) in saved models).
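That FAQ entry also shows a CustomObjectScope variant. A hedged sketch of it, assuming the custom losses and the GroupNormalization layer are importable from the repo's model.py and group_norm.py:

from keras.utils import CustomObjectScope
from keras.models import load_model
from model import loss_gt, loss_VAE
from group_norm import GroupNormalization

# Every custom loss/layer referenced in the saved config has to be visible while deserializing.
with CustomObjectScope({'loss_gt': loss_gt, 'loss_VAE': loss_VAE,
                        'GroupNormalization': GroupNormalization}):
    model = load_model('model.h5')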
Thanks.
Here's what I tried:
from tensorflow.keras.models import load_model
from model import loss_gt, loss_VAE
from group_norm import GroupNormalization

model2 = load_model(model_name, custom_objects={'loss_gt': loss_gt, 'loss_VAE': loss_VAE, 'GroupNormalization': GroupNormalization})
It gets hung up on the following error:
AttributeError                            Traceback (most recent call last)
8 frames
/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/saving/save.py in load_model(filepath, custom_objects, compile)
    141   if (h5py is not None and (
    142       isinstance(filepath, h5py.File) or h5py.is_hdf5(filepath))):
--> 143     return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
    144
    145   if isinstance(filepath, six.string_types):

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/saving/hdf5_format.py in load_model_from_hdf5(filepath, custom_objects, compile)
    160     model_config = json.loads(model_config.decode('utf-8'))
    161     model = model_config_lib.model_from_config(model_config,
--> 162                                                custom_objects=custom_objects)
    163
    164     # set weights

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/saving/model_config.py in model_from_config(config, custom_objects)
     53                     'Sequential.from_config(config)?')
     54   from tensorflow.python.keras.layers import deserialize  # pylint: disable=g-import-not-at-top
---> 55   return deserialize(config, custom_objects=custom_objects)
     56
     57

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/layers/serialization.py in deserialize(config, custom_objects)
    103       module_objects=globs,
    104       custom_objects=custom_objects,
--> 105       printable_module_name='layer')

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    189         custom_objects=dict(
    190             list(_GLOBAL_CUSTOM_OBJECTS.items()) +
--> 191             list(custom_objects.items())))
    192       with CustomObjectScope(custom_objects):
    193         return cls.from_config(cls_config)

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/engine/network.py in from_config(cls, config, custom_objects)
   1079         if layer in unprocessed_nodes:
   1080           for node_data in unprocessed_nodes.pop(layer):
-> 1081             process_node(layer, node_data)
   1082
   1083     name = config.get('name')

/tensorflow-1.15.2/python3.6/tensorflow_core/python/keras/engine/network.py in process_node(layer, node_data)
   1037       if not isinstance(input_tensors, dict) and len(flat_input_tensors) == 1:
   1038         input_tensors = flat_input_tensors[0]
-> 1039       layer(input_tensors, **kwargs)
   1040
   1041   def process_layer(layer_data):

/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in __call__(self, inputs, **kwargs)
    473
    474     # Handle mask propagation.
--> 475     previous_mask = _collect_previous_mask(inputs)
    476     user_kwargs = kwargs.copy()
    477     if not is_all_none(previous_mask):

/usr/local/lib/python3.6/dist-packages/keras/engine/base_layer.py in _collect_previous_mask(input_tensors)
   1439         inbound_layer, node_index, tensor_index = x._keras_history
   1440         node = inbound_layer._inbound_nodes[node_index]
-> 1441         mask = node.output_masks[tensor_index]
   1442         masks.append(mask)
   1443       else:

AttributeError: 'Node' object has no attribute 'output_masks'
Hmm, weird. Haven't seen that error before. Will try to look into it. In the meantime, if you manage to solve it, please do share the solution here.
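One hedged observation on the traceback above: the frames jump from /tensorflow-1.15.2/.../tensorflow_core/python/keras into /usr/local/lib/python3.6/dist-packages/keras, which suggests the rebuilt graph mixes tf.keras with standalone Keras; that kind of mismatch is a common source of the 'Node' object has no attribute 'output_masks' error. A small diagnostic (not a confirmed fix) to check which namespace each piece comes from:

import tensorflow as tf
from group_norm import GroupNormalization

# If this prints False, GroupNormalization was written against standalone Keras
# while load_model came from tensorflow.keras, so the two libraries' layer/Node
# classes end up mixed in one graph; keeping every import on one namespace avoids that.
print(issubclass(GroupNormalization, tf.keras.layers.Layer))
print('GroupNormalization base module:', GroupNormalization.__mro__[1].__module__)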