
Error run quant_test.py to quantize ds_cnn

Haryslee opened this issue 6 years ago • 4 comments

INFO:tensorflow:Confusion Matrix:
[[   0 3124    0    0    0    0    0    0    0    0    0    0]
 [   0 3126    0    0    0    0    0    0    0    0    0    0]
 [   0 3289    0    0    0    0    0    0    0    0    0    0]
 [   0 3209    0    0    0    0    0    0    0    0    0    0]
 [   0 3038    0    0    0    0    0    0    0    0    0    0]
 [   0 3009    0    0    0    0    0    0    0    0    0    0]
 [   0 3026    0    0    0    0    0    0    0    0    0    0]
 [   0 3004    0    0    0    0    0    0    0    0    0    0]
 [   0 3004    0    0    0    0    0    0    0    0    0    0]
 [   0 2976    0    0    0    0    0    0    0    0    0    0]
 [   0 3078    0    0    0    0    0    0    0    0    0    0]
 [   0 3040    0    0    0    0    0    0    0    0    0    0]]
INFO:tensorflow:Training accuracy = 8.47% (N=36923)
INFO:tensorflow:set_size=4445
INFO:tensorflow:Confusion Matrix:
[[   0  371    0    0    0    0    0    0    0    0    0    0]
 [   0  371    0    0    0    0    0    0    0    0    0    0]
 [   0  397    0    0    0    0    0    0    0    0    0    0]
 [   0  406    0    0    0    0    0    0    0    0    0    0]
 [   0  350    0    0    0    0    0    0    0    0    0    0]
 [   0  377    0    0    0    0    0    0    0    0    0    0]
 [   0  352    0    0    0    0    0    0    0    0    0    0]
 [   0  363    0    0    0    0    0    0    0    0    0    0]
 [   0  363    0    0    0    0    0    0    0    0    0    0]
 [   0  373    0    0    0    0    0    0    0    0    0    0]
 [   0  350    0    0    0    0    0    0    0    0    0    0]
 [   0  372    0    0    0    0    0    0    0    0    0    0]]
INFO:tensorflow:Validation accuracy = 8.35% (N=4445)
INFO:tensorflow:set_size=4890
INFO:tensorflow:Confusion Matrix:
[[   0  408    0    0    0    0    0    0    0    0    0    0]
 [   0  408    0    0    0    0    0    0    0    0    0    0]
 [   0  419    0    0    0    0    0    0    0    0    0    0]
 [   0  405    0    0    0    0    0    0    0    0    0    0]
 [   0  425    0    0    0    0    0    0    0    0    0    0]
 [   0  406    0    0    0    0    0    0    0    0    0    0]
 [   0  412    0    0    0    0    0    0    0    0    0    0]
 [   0  396    0    0    0    0    0    0    0    0    0    0]
 [   0  396    0    0    0    0    0    0    0    0    0    0]
 [   0  402    0    0    0    0    0    0    0    0    0    0]
 [   0  411    0    0    0    0    0    0    0    0    0    0]
 [   0  402    0    0    0    0    0    0    0    0    0    0]]
INFO:tensorflow:Test accuracy = 8.34% (N=4890)

Why does this happen? Every prediction lands in a single class. Even after changing the hyperparameters (window_size, window_stride, etc.), it still did not work. However, quantizing the DNN model gives the correct result.

Haryslee avatar Jul 27 '19 10:07 Haryslee

Run fold_batchnorm.py first, then use the resulting "-bnfused" checkpoint when running quant_test.py. Also check the "-act_max []" argument in quant_test.py.

Don't change any hyperparameters; the training and testing parameters must be the same.
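For intuition, here is a minimal sketch (my own illustration, not the repo's actual code) of the kind of symmetric fixed-point fake quantization that a per-layer activation-range list like "-act_max []" controls: each layer's activations are clipped to [-act_max, act_max) and snapped onto a 256-level grid, so a wrong or missing range can destroy the activations and collapse all predictions into one class.

```python
import numpy as np

def fake_quantize(x, act_max, bits=8):
    """Hypothetical 8-bit symmetric fake quantization of activations.

    act_max gives the representable range [-act_max, act_max);
    act_max == 0 is taken to mean "leave this layer unquantized".
    """
    if act_max == 0:
        return x
    levels = 2 ** (bits - 1)      # 128 levels on each side for 8 bits
    step = act_max / levels       # quantization step size
    q = np.round(x / step)        # snap to the integer grid
    q = np.clip(q, -levels, levels - 1)  # saturate out-of-range values
    return q * step               # back to float ("fake" quantization)

x = np.array([-40.0, -0.3, 0.0, 0.26, 31.9, 100.0])
print(fake_quantize(x, act_max=32.0))
```

Note how values outside the range (here -40.0 and 100.0) saturate at the boundaries; if act_max is far too small for a layer, almost every activation saturates, which matches the "everything predicted as one class" symptom above.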

saichand07 avatar Jul 29 '19 11:07 saichand07

Run fold_batchnorm.py first, then use the resulting "-bnfused" checkpoint when running quant_test.py. Also check the "-act_max []" argument in quant_test.py.

Don't change any hyperparameters; the training and testing parameters must be the same.

Thanks a lot! I got it!

Haryslee avatar Jul 29 '19 12:07 Haryslee

I used fold_batchnorm.py and, after creating the "-bnfused" checkpoint, ran quant_test.py with it. The accuracy did not change compared to when I used the plain checkpoint (for DNN, both were about 84%). Do you know if this is expected, or is something wrong?

marjanemd avatar Jul 29 '19 17:07 marjanemd

I used fold_batchnorm.py and, after creating the "-bnfused" checkpoint, ran quant_test.py with it. The accuracy did not change compared to when I used the plain checkpoint (for DNN, both were about 84%). Do you know if this is expected, or is something wrong?

There are no batchnorm layers in the DNN model, so folding batchnorm is not meaningful for it; that is why the accuracy is unchanged. The DS-CNN model does use batchnorm, which is why fold_batchnorm.py matters there.
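To see why folding is a no-op without batchnorm but essential with it, here is a small standalone sketch (my own example, not fold_batchnorm.py itself) of the standard trick: an inference-mode batchnorm following a linear layer can be absorbed into that layer's weights and bias exactly, so the folded network produces identical outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-3

# A toy fully-connected layer followed by batch norm (inference mode,
# i.e. using the stored moving mean/variance).
W = rng.standard_normal((4, 3))
b = rng.standard_normal(3)
gamma = rng.standard_normal(3)   # BN scale
beta = rng.standard_normal(3)    # BN shift
mean = rng.standard_normal(3)    # BN moving mean
var = rng.random(3) + 0.5        # BN moving variance

x = rng.standard_normal((5, 4))

# Reference path: fc layer, then batch norm
y_ref = gamma * ((x @ W + b) - mean) / np.sqrt(var + eps) + beta

# Folded path: rescale the weights and adjust the bias once, offline
scale = gamma / np.sqrt(var + eps)
W_fold = W * scale
b_fold = (b - mean) * scale + beta
y_fold = x @ W_fold + b_fold

print(np.allclose(y_ref, y_fold))  # True
```

Since the folded layer is just another linear layer, quantizing it afterwards only has to deal with one set of weights and one activation range per layer, which is what the "-bnfused" checkpoint provides to quant_test.py.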

Haryslee avatar Jul 30 '19 08:07 Haryslee