Basic_CNNs_TensorFlow2
Which version of TensorFlow does this project use?
I tried pip install tensorflow==2.0.0 and pip install tensorflow==2.0.0-beta1 (i.e. TensorFlow 2.0.0 or 2.0.0-beta1), but when I run train.py I get the error "ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development". Why does this happen, and how can I fix it?
Tensorflow >= 2.0.0
When running the line in train.py that converts to tflite, the error "ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development" appears and I can't convert to tflite. My TensorFlow version is 2.0. Why does this happen? Were you able to convert to tflite? Is there any other way to do the conversion? Thanks!
@xieshenru first, train your model:
# save model
tf.keras.models.save_model(model=model, filepath=cfg.H5_MODEL_PATH, save_format='h5')
then convert it to a tflite model:
import tensorflow as tf
import config as cfg
import reader
# load h5 model
model = tf.keras.models.load_model(cfg.H5_MODEL_PATH)
# convert tflite model
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(cfg.TFLITE_MODEL_FILE, 'wb').write(tflite_model)
print('saved tflite model!')
When I use tf.keras.models.save_model(model=model, filepath=cfg.H5_MODEL_PATH, save_format='h5') to save the model, I get the error "NotImplementedError: Saving the model to HDF5 format requires the model to be a Functional model or a Sequential model. It does not work for subclassed models, because such models are defined via the body of a Python method, which isn't safely serializable. Consider saving to the Tensorflow SavedModel format (by setting save_format="tf") or using save_weights." How can I solve this problem? Thanks for your help!
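The error message seems to point at the SavedModel format instead; would something like the sketch below be the right way to go? (This would go inside train.py where model is the subclassed model; TF_MODEL_DIR is just a placeholder directory, not a constant from this project.)
import tensorflow as tf
import config as cfg
# Subclassed models cannot be saved as HDF5, so save in the SavedModel format.
# TF_MODEL_DIR is a placeholder directory, not a constant from this project.
tf.keras.models.save_model(model=model, filepath=TF_MODEL_DIR, save_format='tf')
# The tflite conversion can still start from the in-memory Keras model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(cfg.TFLITE_MODEL_FILE, 'wb').write(tflite_model)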
@xieshenru Are you training an SSD model? I used that code to convert a MobileNetV2 model.
@xieshenru You can try this code when you train the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open('model/model.tflite', 'wb').write(tflite_model)
print('saved tflite model!')
I used that code to convert ShuffleNetV2 and MobileNetV2 models for a classification problem; with the code in train.py I can't convert to tflite. Here is the code from your train.py:
tf.saved_model.save(model, save_model_dir)
# convert to tensorflow lite format
converter = tf.lite.TFLiteConverter.from_saved_model(save_model_dir)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
Can you convert ShuffleNetV2 or MobileNetV2 to tflite using the code in train.py?
@xieshenru I can save the tflite model in the training code I wrote.
You can try to use the following code to convert to tflite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(TFLite_model_dir, "wb").write(tflite_model)
I will update the code in the next few days.
@calmisential Can you answer this issue? https://github.com/calmisential/TensorFlow2.0_SSD/issues/8
It will take some time to find a solution, and I'm working on it.
I have updated the TensorFlow version to 2.1.0; the issue has been resolved in the latest project code.
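For anyone still on an older version, something like this (assuming a standard pip install) should confirm the upgrade:
# upgrade first, e.g.: pip install tensorflow==2.1.0
import tensorflow as tf
print(tf.__version__)  # should print 2.1.0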
@calmisential @yeyupiaoling Thanks for your help! I used the following code from you to convert to tflite:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open(TFLite_model_dir, "wb").write(tflite_model)
but when I use model.tflite in Python to predict on an input image, the error "RuntimeError: tensorflow/lite/kernels/transpose.cc Transpose op only supports 1D-4D input arrays. Node number 9 (TRANSPOSE) failed to prepare." is raised on the line interpreter.allocate_tensors(). The following is the code I use:
import os
import cv2
import numpy as np
import tensorflow as tf

# model_path, PATH_TEST_IMAGES and filename are defined elsewhere in my script
interpreter = tf.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()  # <-- the RuntimeError is raised here
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
full_path = os.path.join(PATH_TEST_IMAGES, filename)
img = cv2.imread(full_path)
img = cv2.resize(img, (160, 160))
image_np_expanded = np.expand_dims(img, axis=0)
image_np_expanded = image_np_expanded.astype('float32')
interpreter.set_tensor(input_details[0]['index'], image_np_expanded)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
I don't know why this problem occurs. Do you know how to solve it? Thanks!
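Since the error says the TRANSPOSE kernel only supports 1D-4D inputs, would something like the sketch below (reusing model_path from above) be a reasonable way to find which tensor in the converted model has more than 4 dimensions?
import tensorflow as tf
# List the tensors inside the tflite model to look for any tensor
# with more than 4 dimensions feeding a TRANSPOSE node.
interpreter = tf.lite.Interpreter(model_path=model_path)
for t in interpreter.get_tensor_details():
    print(t['index'], t['name'], t['shape'])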