YaHei

7 comments of YaHei

Hi, @zhouyang2640 I'm sorry, but this is only a quantization simulation meant to aid understanding, and the converted model can't be hybridized into a symbol directly. Try [mxnet quantization](https://github.com/apache/incubator-mxnet/tree/master/example/quantization) if you...

Hi, @zhouyang2640 Oh, yes, you're right. I had previously only considered quantization-aware training for pretrained models. To do quantization-aware training from scratch, maybe you can construct your...

Hi, @zhouyang2640 Is [Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](https://arxiv.org/abs/1712.05877) the paper that you mentioned? In theory there is only a small difference between simulation and reality --...

Hi, @zhouyang2640 1. Note that you should bypass BatchNorm when you use `fake_bn`, otherwise the model will apply BatchNormalization twice, which is harmful for training. 2. `fake_bn` would do much...

Hi, @zhouyang2640 Is the fused batchnorm that you mentioned above equivalent to fake bn? In my understanding, **fake bn** means doing batch normalization inside the convolution layer (the layer performs both batch normalization and convolution)...
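
For reference, the fake bn described above amounts to folding the BatchNorm statistics into the convolution weights so that quantization sees the merged parameters. Below is a minimal numpy sketch of that folding; the parameter names (`gamma`, `beta`, `running_mean`, `running_var`) are the usual BatchNorm quantities and are assumptions here, not the exact names used in this repository's implementation.

```python
import numpy as np

def fold_bn_into_conv(weight, bias, gamma, beta, running_mean, running_var, eps=1e-5):
    """Fold BatchNorm parameters into the preceding convolution.

    weight: (out_channels, in_channels, kH, kW) convolution kernel
    bias:   (out_channels,) convolution bias (zeros if the conv had none)
    gamma, beta, running_mean, running_var: (out_channels,) BatchNorm parameters
    Returns (folded_weight, folded_bias) such that, at inference time,
    conv(x, folded_weight, folded_bias) == batchnorm(conv(x, weight, bias)).
    """
    scale = gamma / np.sqrt(running_var + eps)            # per-channel scale
    folded_weight = weight * scale.reshape(-1, 1, 1, 1)   # scale each output channel
    folded_bias = (bias - running_mean) * scale + beta    # shift the bias accordingly
    return folded_weight, folded_bias
```

Once the statistics are folded in like this, the standalone BatchNorm layer has to be bypassed, which is exactly the point of the warning in the earlier comment.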

Sorry, it has been over a month and I only saw this issue today. After investigating, I found the problem was in the quantization of the inputs: previously only the relu+conv case was considered, and negative-valued inputs were not handled. The code has now been updated. However,
1. KL calibration cannot be used for mbv2 yet, mainly because np.bincount does not support negative values (see the sketch after this comment); I haven't had time to fix this and will deal with it later;
2. mbv2 performs reasonably well with online quantization, but offline quantization with naive calibration works poorly, possibly because the numerical distribution of the inputs varies widely.
-----------
> $ python simulate_quantization.py --model=mobilenetv2_1.0 --use-gpu=1 --merge-bn --quant-type=channel --input-signed=true
>
> ************************* Settings *************************
> model : mobilenetv2_1.0
> print_model : False
> list_models...
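
On point 1 above (np.bincount rejecting negative values), one possible workaround is to build the activation histogram for KL calibration with np.histogram over a symmetric range instead. This is only a sketch under that assumption; the bin count below is illustrative and not a value taken from simulate_quantization.py.

```python
import numpy as np

def activation_histogram(activations, num_bins=2048):
    """Histogram of possibly negative activations for KL calibration.

    np.bincount only accepts non-negative integers, so for signed inputs
    the histogram is built with np.histogram over [-max_abs, +max_abs].
    """
    max_abs = float(np.abs(activations).max())
    hist, edges = np.histogram(activations, bins=num_bins, range=(-max_abs, max_abs))
    return hist, edges
```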

Hi, please re-clone the latest code and try again. In an earlier version, count_ops took an input_shape, but it was later changed to take a concrete ndarray; your error is most likely caused by using the older source code together with the latest tests code.
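
To make the API change concrete, a hypothetical call under the new convention might look like the sketch below; the model constructor comes from mxnet.gluon.model_zoo, while the exact import path and signature of count_ops are assumptions, so check the latest sources for the real API.

```python
import mxnet as mx
from mxnet.gluon.model_zoo.vision import mobilenet_v2_1_0

net = mobilenet_v2_1_0(pretrained=False)
net.initialize()

# Earlier versions (assumed): count_ops(net, input_shape=(1, 3, 224, 224))
# Current version (assumed): pass a concrete ndarray instead of a shape.
dummy_input = mx.nd.zeros((1, 3, 224, 224))
# ops = count_ops(net, dummy_input)   # count_ops is provided by this repository
```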