daverim

Results: 30 comments of daverim

Hi albertnanda@ thanks for your request -- we are actively researching how to support these layers. Unfortunately the hub layer is basically a wrapper for a saved model function...

Currently we are working on this; @fredrec has more details.

Could you post the tflite model -- it appears the pad op was not converted with the correct type constraints. Could you also try adding the flag `converter.experimental_new_quantizer = False`. Thanks
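A minimal sketch of where that flag goes, assuming a hypothetical stand-in Keras model (the model and shapes here are placeholders, not the reporter's model):

```python
import tensorflow as tf

# Hypothetical stand-in for the model that hit the pad issue.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(4,)),
    tf.keras.layers.Dense(2),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Fall back to the old quantizer as a workaround for the
# pad type-constraint problem:
converter.experimental_new_quantizer = False
tflite_model = converter.convert()  # serialized flatbuffer bytes
```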

Hi peri044@ and Jia-HongHenryLee@ I'm looking into it now, but there are a couple of workarounds. First, it seems to save correctly if you use `model.save('export_dir', save_format='h5')` I think this...

Hi, sorry for the delay. I just tested your sample code and it seems to be resolved now. There are some warnings about untraced functions. Using tf==2.8.0-dev20210930 and tfmot (tensorflow_model_optimization)==0.7.0. Please try...

Hi, this confusion is probably due to the fact that numpy/python round ties to the nearest even integer (so 0.5 rounds toward zero), while c++/android round ties away from zero. In your example, you probably expect `zp = 0`, but it is actually...
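A quick way to see the tie-breaking difference on the Python side (the C++ behaviour is only described in comments, since it cannot run here):

```python
import numpy as np

# Python and NumPy break ties to the nearest even integer:
print(round(0.5), round(1.5))        # 0 2
print(np.round(0.5), np.round(1.5))  # 0.0 2.0

# C++ std::round (used by many Android/TFLite kernels) breaks ties
# away from zero, so std::round(0.5) == 1.0 and std::round(-0.5) == -1.0.
# A zero-point computed as round(-min/scale) can therefore differ by 1
# between a NumPy reference and the on-device kernel.
```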

Hi FSet89@. Sorry for the inconvenience. I just tried out your code on a very simple convolutional model and it seems to run fine. Are you running in eager mode?...

Just to be clear, you should use the new_quantizer and new_converter with tf nightly for best results. You seem to be running afoul of the NNAPI. You should try running...
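For reference, the two converter attributes mentioned above can be set explicitly like this (the model is a hypothetical stand-in; on recent nightlies both flags default to True, but setting them makes the intent clear):

```python
import tensorflow as tf

# Hypothetical stand-in model.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(8,)),
    tf.keras.layers.Dense(4),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Use the MLIR-based converter and the new quantizer:
converter.experimental_new_converter = True
converter.experimental_new_quantizer = True
tflite_model = converter.convert()
```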

Hi @gcunhase if you add the quantize_and_dequantize nodes, you will need to create a tf.function that contains both the original function and the quantize/dequantize graph, and save this. For example if...
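A sketch of what such a wrapper might look like, assuming a hypothetical stand-in model and placeholder `input_min`/`input_max` values (the real ranges would come from calibration):

```python
import tensorflow as tf

# Hypothetical stand-in for the original model.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(4,)),
    tf.keras.layers.Dense(2),
])

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def serving_fn(x):
    # Record quantize/dequantize (QDQ) nodes in the exported graph
    # alongside the original computation. The [-1, 1] range is a
    # placeholder; use calibrated min/max in practice.
    x = tf.quantization.quantize_and_dequantize_v2(
        x, input_min=-1.0, input_max=1.0)
    return model(x)

tf.saved_model.save(
    model, 'export_dir',
    signatures={'serving_default': serving_fn})
```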