qnlp
Predictions are wrong after model.load_weights()
Hello @rdisipio, thank you very much for your work.
I am trying to reproduce your work with the VQC. At first, training and validation accuracy did not improve during training. After adapting the learning rate, they did:
import tensorflow as tf

# optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-2,
    decay_steps=10000,
    decay_rate=0.9)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
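For context, the optimizer is plugged into the model roughly like this (the loss and metrics below are placeholders, not necessarily the exact ones used; model is the Keras model returned by make_model_quantum):

# Hypothetical compile step; loss/metrics are placeholders
model.compile(optimizer=optimizer,
              loss='binary_crossentropy',
              metrics=['accuracy'])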
Now I have a problem with model.predict(txt_embeddings). After saving the model with model.save_weights(modelname), the predictions are not right. On the other hand, when I train the model and predict classes right after model.evaluate(), the predictions are correct.
The problem seems to be in how I load the model and the saved weights. My approach is:
model = make_model_quantum(parameters)
model.load_weights(modelname)
model.predict(txt_embeddings)
where parameters are equal to the ones used for training.
Can you help me out?
Regards
Hi,
thanks for looking into this. Indeed, saving and loading models is always a pain in the neck. Have you tried pickling the whole model? Something like:
import pickle

with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)
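To restore it later, the corresponding load step would look roughly like this (assuming the hybrid model pickles cleanly in the first place, which is not guaranteed):

import pickle

# Load the pickled model back from disk
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)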
Also, if your model is based on TensorFlow, you may want to try model.save("models/quantum_model"), model.save("models/quantum_model.h5"), or tf.saved_model.save(model, "models/quantum_model").
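For completeness, the matching load calls would be something along these lines (custom quantum layers may need to be registered via the custom_objects argument to deserialize; that is an assumption here, not something tested on this model):

import tensorflow as tf

# SavedModel directory or HDF5 file, matching whichever format was saved above
model = tf.keras.models.load_model("models/quantum_model")
model = tf.keras.models.load_model("models/quantum_model.h5")

# Low-level SavedModel API (returns a SavedModel object, not a Keras model)
loaded = tf.saved_model.load("models/quantum_model")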
There are probably other ways, but try these first.
Hi @rdisipio ,
Thanks for your answer and sorry for the late reply.
Unfortunately, none of your suggestions helped.
However, I found that the best way for me to save hybrid models is still to save only the weights:
model.save_weights(modelname)
You can then load the model weights by:
- first creating the same model used for training
- compiling the model
- evaluating the model (I do this, but I am not sure whether it is necessary)
and finally
model.load_weights(modelname)
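A minimal sketch of the whole sequence, assuming make_model_quantum, parameters, modelname and txt_embeddings as above; the loss, metrics and the val_embeddings/val_labels arrays are placeholders:

import tensorflow as tf

# 1) rebuild the same architecture used for training
model = make_model_quantum(parameters)

# 2) compile it (loss/metrics are placeholders)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-2),
              loss='binary_crossentropy',
              metrics=['accuracy'])

# 3) run the model once so all variables are built before loading
#    (val_embeddings/val_labels are hypothetical validation arrays)
model.evaluate(val_embeddings, val_labels, verbose=0)

# 4) restore the trained weights and predict
model.load_weights(modelname)
predictions = model.predict(txt_embeddings)

The evaluate step most likely just forces the model to build its variables before load_weights is called; a single model.predict on a small batch would presumably do the same job.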
Regards