Hyeonsoo Moon

Results 11 comments of Hyeonsoo Moon

Did anyone solve this issue? My int8-converted model does not work with the script:

```
python detect.py --weights ./checkpoints/yolov4-tiny-416-int8.tflite --size 416 --model yolov4 --image ./data/kite.jpg --framework tflite --tiny
```
...

The dataset catalog guides me perfectly. I will try following this script. Thanks a lot for the quick response!

Thanks for the reply. I confirmed that the error goes away when I simply import tensorflow_addons. However, test performance was odd for the converted TFLite model. I saved the TFLite float32 model...

Thanks for your comments. I solved the conversion issue by simply importing tfa, and with appropriate data preprocessing for inference I successfully got accuracy similar to the original.

Can you guys try applying quantization to the Dense layers only? I successfully ran QAT on the model with the code below.

```python
def apply_quantization_to_dense(layer):
    if isinstance(layer, tf.keras.layers.Dense):
        return tfmot.quantization.keras.quantize_annotate_layer(layer)
    return layer
```
...

I am also wondering about this...

@kizoooh, did you solve this issue? I have the same problem.

Did you set the input shapes as arguments? I used commands like

```
./pnnx best.torchscript.pt inputshape=[1,3,640,640] inputshape2=[1,3,320,320]
```

and then

```
./onnx2ncnn best.onnx model.param model.bin
```

With these commands above, ...

> Did you set the input shapes as arguments? I used commands like `./pnnx best.torchscript.pt`...

@tucan9389 Thanks for the response. Due to policy, I am not able to share the full code or a reproducible Colab, but I can share the CustomLayer implementation, I guess....