
act_max parameter for ds-cnn model

Open leiming0225 opened this issue 5 years ago • 3 comments

@navsuda Another issue: when I run python quant_test.py --act_max 64 0 0 0 0 0 0 0 0 0 0 0 ... for the ds-cnn model, I get a training accuracy very close to the training accuracy reported by test.py. But when I try to quantize the second activation with python quant_test.py --act_max 64 x 0 0 0 0 0 0 0 0 0 0, every value I tried for x (128, 64, 32, 16, 8, 4, 2, 1) gives a much lower training accuracy than x == 0. Could you give me some idea what is happening in this case? Thanks.

leiming0225 avatar Nov 08 '18 01:11 leiming0225
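The sweep described in the question could be scripted along these lines. This is only a sketch: run_quant_test, parse_accuracy, and the accuracy-parsing regex are illustrative assumptions, not part of the repo, and assume quant_test.py prints a line containing the word "accuracy" followed by a number.

```python
# Sketch: sweep candidate act_max values for one slot and keep the best.
# run_quant_test / parse_accuracy are hypothetical helpers, not repo code;
# the regex is a guess and may need adjusting to the tool's real output.
import re
import subprocess

def parse_accuracy(output):
    """Pull a figure like 'accuracy = 0.9412' out of the tool's stdout."""
    match = re.search(r"accuracy[^0-9]*([0-9.]+)", output, re.IGNORECASE)
    return float(match.group(1)) if match else 0.0

def run_quant_test(act_max):
    """Invoke quant_test.py with the given act_max list; return its accuracy."""
    cmd = ["python", "quant_test.py", "--act_max"] + [str(v) for v in act_max]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return parse_accuracy(out)

def sweep_slot(slot, candidates=(128, 64, 32, 16, 8, 4, 2, 1)):
    """Try each candidate in one act_max slot; first slot fixed at 64."""
    base = [64] + [0] * 11  # 12 values, as in the ds-cnn command line
    results = {}
    for cand in candidates:
        trial = list(base)
        trial[slot] = cand
        results[cand] = run_quant_test(trial)
    return max(results, key=results.get), results

# Usage (runs quant_test.py eight times):
#   best, scores = sweep_slot(1)
```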

@leiming0225 I've had the same problem. How can I solve it?

ccnankai avatar Nov 26 '18 03:11 ccnankai

In fact, we quantize layers with maximum levels that are only powers of 2, so there is always some quantization error in between. But in my experience (and I believe also in the literature, as I have seen a proof somewhere), you can approach the same accuracy by quantizing the remaining layers: continuing layer by layer, you will reach points where accuracy improves again (across training, validation, and test). So losing a bit of accuracy when quantizing the second layer does not mean you will keep losing on the other layers; you may even see an improvement if you search for the optimal accuracy during the quantization procedure. It is as if you are training the model again, only manually this time, so accuracy fluctuations are inevitable; find the right "gradient" incrementally by hand. :-) @leiming0225 @ccnankai

@ccnankai Don't forget that this is not everything: you then need some additional manual calculations to use these levels in the implementation, something like DSP arithmetic. Examples are provided in the quantization README file.

pooyaww avatar Nov 26 '18 11:11 pooyaww
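The layer-by-layer procedure pooyaww describes can be sketched as a greedy search. This is only an illustration of the idea, not repo code: evaluate stands in for a call to quant_test.py with a given act_max list, and the candidate set and the "0 = unquantized" convention follow the commands in this thread.

```python
# Sketch of a greedy per-layer act_max search, as suggested above.
# `evaluate` is a hypothetical callback returning accuracy for a config.
def greedy_act_max_search(num_layers, evaluate,
                          candidates=(128, 64, 32, 16, 8, 4, 2, 1)):
    """Greedily fix each act_max slot in turn; 0 leaves a layer unquantized."""
    act_max = [0] * num_layers
    for layer in range(num_layers):
        best_val = 0
        best_acc = evaluate(act_max)       # baseline: this layer unquantized
        for cand in candidates:
            trial = list(act_max)
            trial[layer] = cand
            acc = evaluate(trial)
            if acc > best_acc:
                best_val, best_acc = cand, acc
        act_max[layer] = best_val          # keep the winner (possibly 0)
    return act_max
```

One simplification: this sketch leaves a layer unquantized whenever every candidate loses accuracy, whereas pooyaww's point is that accepting a small temporary loss can pay off at later layers, so a real search might tolerate a small accuracy margin instead of requiring strict improvement.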


I've had the same problem. How can I solve it?

wayne175 avatar Apr 14 '19 09:04 wayne175