ms2deepscore
add on/off switch for dropout during inference
To apply Monte-Carlo dropout it would be nice to have a simple method to turn dropout on during inference.
This is needed for #53, and so far I have used an ugly workaround in my notebooks (importing the weights into a copy of the base network with `training=True` forced to be on).
Here, it should become an alternative `predict` method in `SiameseModel`, probably along the lines of: https://github.com/keras-team/keras/issues/9412
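Framework details aside, the intended behaviour of such a method can be sketched in plain numpy. Everything below is a hypothetical illustration (the toy one-layer "base network" and the name `predict_with_dropout` are not ms2deepscore's actual API): run the network several times with dropout forced on and aggregate the stochastic passes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the Siamese base network: one dense layer followed
# by inverted dropout that can be forced on at inference time.
W = rng.normal(size=(10, 4))

def base_forward(x, training=False, rate=0.2):
    h = x @ W
    if training:  # dropout is only applied when explicitly requested
        mask = rng.random(h.shape) >= rate
        h = h * mask / (1.0 - rate)
    return h

def predict_with_dropout(x, n_passes=50):
    """Monte-Carlo dropout: aggregate several stochastic forward passes."""
    passes = np.stack([base_forward(x, training=True) for _ in range(n_passes)])
    return passes.mean(axis=0), passes.std(axis=0)

x = rng.normal(size=(3, 10))
mean_emb, std_emb = predict_with_dropout(x)
print(mean_emb.shape, std_emb.shape)  # (3, 4) (3, 4)
```

The per-element standard deviation is exactly the uncertainty estimate that #53 is after; with `training=False` the forward pass stays deterministic, so regular predictions are unaffected.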
OK, the solution from the link above does not work for tensorflow 2.3. So I moved on to https://stackoverflow.com/questions/63238203/how-to-get-intermediate-outputs-in-tf-2-3-eager-with-learning-phase
This gives two options. First this one:
```python
from tensorflow.keras.models import Model

# Wrap the base network in a functional model so it can be called
# directly with an explicit training flag (keeping dropout active).
partial_model = Model(model.base.inputs, model.base.layers[-1].output)
embedding = partial_model(input_vector, training=True)
```
or:
```python
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.backend import eager_learning_phase_scope

fn = K.function([model.base.input], [model.base.layers[-1].output])

# run in test mode, i.e. 0 means test
with eager_learning_phase_scope(value=0):
    output_test = fn(input_vector)

# run in training mode, i.e. 1 means training
with eager_learning_phase_scope(value=1):
    output_train = fn(input_vector)
```
Both things seem to work locally for me. However, I don't fully understand it, because I get different results for:

```python
embedding1 = partial_model(input_vector, training=False)
embedding2 = model.base.predict(input_vector)
np.all(embedding1 == embedding2)  # --> gives False, but I expected the same embeddings!
```
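One generic caveat with such checks (independent of Keras): two numerically equivalent execution paths, such as a direct eager call versus the batched `predict` loop, can produce outputs that differ only in the last float bits, so `np.allclose` is a safer comparison than exact `==`. A minimal demonstration of why:

```python
import numpy as np

# Floating-point addition is not associative: the same sum evaluated
# in two orders gives bitwise-different results.
left = (0.1 + 0.2) + 0.3   # 0.6000000000000001
right = 0.1 + (0.2 + 0.3)  # 0.6

print(left == right)             # False
print(np.allclose(left, right))  # True
```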
Any ideas on that @svenvanderburg ?
Sorry, I ran part of it again and now it seems to work fine (only `partial_model(input_vector, training=False)` returns a `tf.Tensor`).
Cool, I would of course have suggested running part of it again ;)
Works now in #168