navsuda
@jeremyholleman It failed for me too, but works fine with an older commit (1d777eeeba686ba8e1f3e96c1f9a9396ae17ac1f) of CMSIS_5 where there is no [TransformFunctions.c](https://github.com/ARM-software/CMSIS_5/tree/1d777eeeba686ba8e1f3e96c1f9a9396ae17ac1f/CMSIS/DSP/Source/TransformFunctions).
@pooyaww, The only explanation I can think of for this discrepancy is that different values of `--window_size_ms` and `--window_stride_ms` were used during training vs. testing. Please make sure...
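A minimal sketch (not the repo's actual code) of why those two flags must match: the number of feature frames, and hence the model's input shape, depends on both the window size and the stride, so changing either one at test time feeds the network a differently shaped input. The clip length and parameter values below are illustrative only.

```python
def num_frames(clip_ms, window_size_ms, window_stride_ms):
    """Number of analysis windows that fit in a clip of clip_ms milliseconds."""
    if clip_ms < window_size_ms:
        return 0
    return 1 + (clip_ms - window_size_ms) // window_stride_ms

# Training with 40 ms windows and a 20 ms stride on 1000 ms clips:
train_frames = num_frames(1000, 40, 20)   # 49 frames
# Testing with a different stride silently changes the input shape:
test_frames = num_frames(1000, 40, 40)    # 25 frames
print(train_frames, test_frames)          # 49 25
```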
@xingdonw, One thing to check: did the accuracy degrade after fusing the batch-norm layers into the preceding convolution layers, as described in [the guide](https://github.com/ARM-software/ML-KWS-for-MCU/blob/master/Deployment/Quant_guide.md#fusing-batch-norm-layers)?
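For reference, a NumPy sketch of what batch-norm fusion does (this is the standard folding formula, not the guide's exact script). Fusion is mathematically exact, so accuracy should be unchanged *before* quantization; a drop at this stage usually points to a fusion bug rather than a quantization issue. The 1×1-conv-as-matmul check below uses made-up shapes.

```python
import numpy as np

def fuse_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into the preceding conv's weights and bias.

    w: conv weights with out_channels as the leading axis; b: bias (out_channels,).
    Returns (w_fused, b_fused) computing the same function as conv followed by BN.
    """
    scale = gamma / np.sqrt(var + eps)
    w_fused = w * scale.reshape(-1, *([1] * (w.ndim - 1)))
    b_fused = (b - mean) * scale + beta
    return w_fused, b_fused

# Equivalence check on random data, treating a 1x1 conv as a matmul:
rng = np.random.default_rng(0)
c_out, c_in = 4, 3
w = rng.standard_normal((c_out, c_in)); b = rng.standard_normal(c_out)
gamma, beta = rng.standard_normal(c_out), rng.standard_normal(c_out)
mean, var = rng.standard_normal(c_out), rng.random(c_out) + 0.1
x = rng.standard_normal(c_in)

y_ref = (w @ x + b - mean) / np.sqrt(var + 1e-5) * gamma + beta  # conv -> BN
wf, bf = fuse_bn_into_conv(w, b, gamma, beta, mean, var)
assert np.allclose(wf @ x + bf, y_ref)  # fused conv matches
```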
@mansiag05, It would be simple mean subtraction (i.e. 1). If you use `mean.binaryproto` (e.g. [here](https://github.com/ARM-software/ML-examples/blob/master/cmsisnn-cifar10/models/cifar10_m4_train_test.prototxt#L11)), that would be pixel-wise subtraction (i.e. `image_data[i] - mean_data[i]`). If you use the channel-wise mean transform (e.g. [here](https://github.com/BVLC/caffe/blob/master/models/bvlc_reference_caffenet/train_val.prototxt#L16)),...
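The three variants can be summarized with NumPy broadcasting; this is just an illustration with placeholder mean values, using Caffe's channels-first (C, H, W) layout:

```python
import numpy as np

# Hypothetical 3-channel 2x2 image in channels-first (C, H, W) layout.
img = np.arange(12, dtype=np.float32).reshape(3, 2, 2)

# 1) Simple (scalar) mean subtraction: one value for the whole image.
out_scalar = img - img.mean()

# 2) Pixel-wise subtraction (mean.binaryproto): a full mean *image* of the
#    same shape as the input, subtracted element by element.
mean_image = np.full_like(img, 2.0)            # placeholder values
out_pixelwise = img - mean_image

# 3) Channel-wise subtraction (mean_value in the prototxt): one value per
#    channel, broadcast across the spatial dimensions.
mean_per_channel = np.array([1.0, 2.0, 3.0], dtype=np.float32)
out_channelwise = img - mean_per_channel.reshape(3, 1, 1)
```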