Jorge Ruiz

Results: 3 comments by Jorge Ruiz

@Xcompanygames Consider using ONNX instead of TF, as it's usually faster and more reliable. I'm having a memory leak, but I think it's because the inference data stays in memory...

A few months ago I managed to quantize this LSTM model and run it on a Coral Edge TPU: https://colab.research.google.com/github/google-coral/tutorials/blob/master/train_lstm_timeseries_ptq_tf2.ipynb The example has been broken since TF 2.7...

You map 0 to -128 and 1 to 127, and all the intermediate values are then quantized into that interval. If the network is tightly fitted, quantization...
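The mapping described above is standard affine int8 quantization. A minimal sketch in plain Python, assuming a [0, 1] real-valued range as in the comment (the rounding and clamping conventions here are illustrative assumptions, not taken from any particular framework):

```python
def quantize(x, real_min=0.0, real_max=1.0, qmin=-128, qmax=127):
    # scale maps the real range onto the 256 available int8 levels
    scale = (real_max - real_min) / (qmax - qmin)
    # zero_point is the int8 value that represents the real value 0.0;
    # with a [0, 1] range it works out to qmin (-128)
    zero_point = round(qmin - real_min / scale)
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp into the int8 interval

def dequantize(q, real_min=0.0, real_max=1.0, qmin=-128, qmax=127):
    scale = (real_max - real_min) / (qmax - qmin)
    zero_point = round(qmin - real_min / scale)
    return (q - zero_point) * scale
```

With this mapping the worst-case round-trip error per value is about scale/2 ≈ 0.002, which is why a "tightly fitted" network, whose activations depend on fine distinctions within that interval, can degrade noticeably after quantization.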