
uint8 quantization

Open Jove125 opened this issue 5 years ago • 1 comment

Hello!

Has anyone tried uint8 quantization of this model?

I tried to use this script:

```shell
tflite_convert \
  --graph_def_file=./model.pb \
  --output_file=./pose-quant.tflite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --inference_type=QUANTIZED_UINT8 \
  --input_shape="1,96,96,3" \
  --input_array=image \
  --output_array=Convolutional_Pose_Machine/stage_5_out \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --mean=0 \
  --std_dev=1
```

After that I changed the Android mobile application to work with the quantized model (byte input and output instead of float I/O).

Performance improved by roughly 1.5-2x, but the keypoints are very unstable (they jump back and forth). It seems to me that the main problem is the range of the output data: it is a byte now and should span 0-255, but in practice it only spans about 0-30. Neighbouring points in heatMapArray end up with the same value, so it is hard to pick the most likely one; that's why they "jump".
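For context, a quantized TFLite output maps back to floats via the tensor's scale and zero point (in practice read from `interpreter.get_output_details()[0]['quantization']`). A minimal numpy sketch, with hypothetical scale/zero-point values chosen to match the `--default_ranges_min=0 --default_ranges_max=6` setting above, shows why a 0-30 spread wastes most of the uint8 resolution:

```python
import numpy as np

# Hypothetical quantization parameters for illustration; in a real app,
# read them from the interpreter's output details.
scale, zero_point = 6.0 / 255.0, 0  # range [0, 6] mapped onto [0, 255]

# Fake uint8 heatmap output: values only span 0..30 of the 0..255 range,
# as observed in the issue, so only ~30 distinct levels are in use.
heatmap_u8 = np.array([[0, 10, 30],
                       [5, 29, 30]], dtype=np.uint8)

# Dequantize: real_value = scale * (quantized_value - zero_point)
heatmap_f = scale * (heatmap_u8.astype(np.float32) - zero_point)

# Adjacent cells dequantize to near-identical scores (29 vs 30 differ by
# only one quantization step), so argmax can flip between neighbours.
peak = np.unravel_index(np.argmax(heatmap_f), heatmap_f.shape)
```

With calibrated (narrower) ranges, the same 256 levels would cover only the values the model actually produces, giving finer score resolution.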

Does anybody have an idea how to fix it? Change the range to 0-255, change the GaussianBlur parameters, quantize differently, or something else?
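One of the options mentioned, smoothing the heatmap before taking the argmax, can be sketched in pure numpy (the kernel size and sigma below are illustrative, not values from the project). Blurring aggregates support from neighbouring cells, which breaks ties between equally quantized peaks:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    k = np.exp(-0.5 * (ax / sigma) ** 2)
    return k / k.sum()

def smooth_heatmap(hm, size=5, sigma=1.0):
    # Separable Gaussian blur: filter rows, then columns.
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(hm.astype(np.float32), pad, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='valid'), 0, rows)

# Coarsely quantized heatmap with two tied maxima; a bare argmax would
# pick whichever comes first and can jump between frames.
hm = np.zeros((9, 9), dtype=np.uint8)
hm[3, 3] = 30
hm[3, 4] = 30
hm[3, 5] = 29  # extra neighbourhood support around columns 4-5

smoothed = smooth_heatmap(hm)
peak = np.unravel_index(np.argmax(smoothed), smoothed.shape)
# The blur favours the cell with more surrounding mass: peak is (3, 4).
```

This only masks the symptom, though; recovering the lost score resolution (e.g. by calibrating the quantization ranges instead of guessing `--default_ranges_min/max`) is the more fundamental fix.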

Jove125 avatar Dec 08 '19 19:12 Jove125

Hello, did you solve it? I'm facing the same problem.

lzcchl avatar Jun 04 '21 07:06 lzcchl