
The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.

Open peiwenhuang27 opened this issue 4 years ago • 3 comments

OS you are using: macOS 11.4

Version of TensorFlow: v2.5.0

Environment: Docker

Under TF 2.5.0, I converted my pre-trained model from saved_model to tflite.

Afterwards, in a Docker container, when I converted this tflite model to pb format using tflite2tensorflow, the following error occurred:

ERROR: The UNIDIRECTIONAL_SEQUENCE_LSTM layer is not yet implemented.

(In this experiment, I did not perform quantization/optimization, but later on I plan to use TFLite to quantize my model and save it as .tflite, which is why I did not convert saved_model directly to pb.)
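For context, the saved_model → tflite step described above can be sketched as follows. This is a minimal sketch, not the poster's actual code; the model, shapes, and paths ("saved_model_dir", "model.tflite") are placeholders.

```python
import tensorflow as tf

# Stand-in for a pre-trained network; substitute your own model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

# Export to SavedModel, then convert to TFLite without any
# quantization/optimization, mirroring the flow described above.
tf.saved_model.save(model, "saved_model_dir")  # hypothetical path
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```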

peiwenhuang27 avatar Jul 14 '21 06:07 peiwenhuang27

In fact, I tried to implement that operation a month ago, but I did not have enough sample models to build a reliable conversion routine. To the extent possible, could you provide the following resources? The minimum amount of information you are willing to disclose is fine.

  1. Source code for building the LSTM model.
  2. saved_model
  3. tflite file converted from saved_model

I'm having trouble with TFLite's UNIDIRECTIONAL_SEQUENCE_LSTM because it is very difficult to connect it to TensorFlow's standard operations.

(screenshot attached: Screenshot 2021-07-14 16:56:40)

Thank you for your help.

PINTO0309 avatar Jul 14 '21 07:07 PINTO0309

Hi, sorry for the late reply. I have attached a zip file of my models (only initialized, without training) and source code; let me know if there's a problem with it! By the way, I noticed that the Quantize layer from tflite is also not yet implemented. Should I provide some samples for that as well?

Thank you!

resources.zip

peiwenhuang27 avatar Jul 16 '21 04:07 peiwenhuang27

Thank you! I'm very busy with my day job, so I'll examine it carefully when I have time.

> By the way, I noticed that the Quantize layer from tflite is also not yet implemented. Should I provide some samples for that as well?

I am aware of this point as well. You do not need to provide resources for it, as I already have a large number of samples and I know that it is technically feasible. If you are in a hurry to convert your Quantize layer, you can try the following tool: https://github.com/onnx/tensorflow-onnx

$ python -m tf2onnx.convert \
--opset 11 \
--tflite int8_quantized_tflite_xxxx.tflite \
--output model.onnx \
--dequantize

PINTO0309 avatar Jul 16 '21 04:07 PINTO0309