Mismatch in number of weights when loading quantized model (activation layer)
Describe the bug
Saving and subsequently loading a quantized model results in the following error:
Traceback (most recent call last):
File "test.py", line 37, in <module>
model = tf.keras.models.load_model('MinimalExample.h5')
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/save.py", line 182, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 181, in load_model_from_hdf5
load_weights_from_hdf5_group(f['model_weights'], model.layers)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 706, in load_weights_from_hdf5_group
str(len(weight_values)) + ' elements.')
ValueError: Layer #121 (named "quant_y_a_relu" in the current model) was found to correspond to layer quant_y_a_relu in the save file. However the new layer quant_y_a_relu expects 3 weights, but the saved weights have 1 elements.
The error can be reproduced with the code below (test.py). Note that no error occurs when setting quantize_model = False.
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from tensorflow_model_optimization.python.core.quantization.keras.default_8bit import default_8bit_quantize_configs

quantize_model = True

# Build model
base_model = tf.keras.applications.MobileNetV2(input_shape=(480, 640, 3), classes=2, weights='imagenet', include_top=False)
x = base_model.get_layer('block_12_add').output
y_a = tf.keras.layers.Conv2D(256, 1, padding='same', dilation_rate=1, use_bias=False, kernel_initializer='he_normal', name='y_a_conv2d')(x)
y_a = tf.keras.layers.BatchNormalization(name='y_a_bn')(y_a)
y_a = tf.keras.layers.Activation('relu', name='y_a_relu')(y_a)
y_b = tf.keras.layers.Conv2D(256, 3, padding='same', dilation_rate=6, use_bias=False, kernel_initializer='he_normal', name='y_b_conv2d')(x)
y_b = tf.keras.layers.BatchNormalization(name='y_b_bn')(y_b)
y_b = tf.keras.layers.Activation('relu', name='y_b_relu')(y_b)
output_tensor = tf.keras.layers.Concatenate(name='aspp_concat')([y_a, y_b])
model = tf.keras.models.Model(inputs=base_model.input, outputs=output_tensor, name='MinimalExample')

# Save model
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metrics = ['accuracy']
if quantize_model:
    with tf.keras.utils.custom_object_scope({'NoOpQuantizeConfig': default_8bit_quantize_configs.NoOpQuantizeConfig}):
        model = tfmot.quantization.keras.quantize_model(model)
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
model.save('MinimalExample.h5')
del model

# Load model
if quantize_model:
    with tfmot.quantization.keras.quantize_scope():
        model = tf.keras.models.load_model('MinimalExample.h5')
else:
    model = tf.keras.models.load_model('MinimalExample.h5')

# Convert model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Export model
with open('MinimalExample.tflite', 'wb') as f:
    f.write(tflite_model)
System information
TensorFlow version (installed from source or binary): 2.3.0 (Docker image)
TensorFlow Model Optimization version (installed from source or binary): 0.4.1 (Docker image)
Python version: 3.6.9 (Docker image)
Describe the expected behavior
The code should not crash.
Describe the current behavior
The code crashes with the ValueError shown above.
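For reference, the weights that were actually serialized for the failing layer can be listed directly from the HDF5 file. A minimal inspection sketch using h5py (the layer name is taken from the traceback above):

import h5py

# List the weight names saved for the layer that fails to load.
with h5py.File('MinimalExample.h5', 'r') as f:
    group = f['model_weights']['quant_y_a_relu']
    names = [n.decode('utf8') if isinstance(n, bytes) else n
             for n in group.attrs['weight_names']]
    print(names)  # one saved entry, while the rebuilt layer expects three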
Hi @Lotte1990, sorry for the late response. I want to check whether this still affects you before taking a look into it.
@teijeong @Xhark Yes, I can confirm this is still an issue using tf-nightly (2.6.0.dev20210418) and tensorflow-model-optimization 0.5.0. Please look into this issue.
@teijeong @Xhark Any updates on this?
I am still having this issue. Is there any progress on it? Has anyone found a workaround in the meantime?
This issue is still bothering me. Please look into this.
@Lotte1990 Same issue for me. Did you find any workaround yet?
@mrj-taffy Unfortunately not. Let's hope it will be fixed soon. Perhaps @Xhark could give an update on the situation...
quant_model.load_weights(model_path)
@WillLiGitHub What do you mean? Could you explain a bit more?
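Presumably the suggestion is to avoid load_model entirely: rebuild the quantized model in code and restore only the weights from the saved file. A sketch of that approach, reusing the model construction from test.py above (it is unconfirmed whether this actually sidesteps the weight-count mismatch):

import tensorflow as tf
import tensorflow_model_optimization as tfmot

def build_model():
    # Identical architecture to test.py; the weights come from the checkpoint,
    # so the ImageNet initialization is not needed here.
    base_model = tf.keras.applications.MobileNetV2(
        input_shape=(480, 640, 3), classes=2, weights=None, include_top=False)
    x = base_model.get_layer('block_12_add').output
    y_a = tf.keras.layers.Conv2D(256, 1, padding='same', dilation_rate=1, use_bias=False,
                                 kernel_initializer='he_normal', name='y_a_conv2d')(x)
    y_a = tf.keras.layers.BatchNormalization(name='y_a_bn')(y_a)
    y_a = tf.keras.layers.Activation('relu', name='y_a_relu')(y_a)
    y_b = tf.keras.layers.Conv2D(256, 3, padding='same', dilation_rate=6, use_bias=False,
                                 kernel_initializer='he_normal', name='y_b_conv2d')(x)
    y_b = tf.keras.layers.BatchNormalization(name='y_b_bn')(y_b)
    y_b = tf.keras.layers.Activation('relu', name='y_b_relu')(y_b)
    out = tf.keras.layers.Concatenate(name='aspp_concat')([y_a, y_b])
    return tf.keras.models.Model(inputs=base_model.input, outputs=out, name='MinimalExample')

# Re-apply quantization to the freshly built model, then restore weights only,
# instead of deserializing the whole model with load_model.
quant_model = tfmot.quantization.keras.quantize_model(build_model())
quant_model.load_weights('MinimalExample.h5')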