peterjc123
> I've put it on my Google Drive and shared it with you. Thanks.

I will take a look tomorrow.
@BmanClark Hi, I've put up a fix to eliminate the zero bias tensors for the DeConv ops. https://github.com/alibaba/TinyNeuralNetwork/pull/263 But I'm not sure if group deconv is supported in TFLite. If...
Update: it looks like group deconvolution is not supported, at least in the XNNPACK delegate. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/delegates/xnnpack/xnnpack_delegate.cc#L6213

> Tinkering further I can get around the problem by specifying `group_conv_rewrite=True`. This splits each of...
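For anyone hitting the same limitation, here is a minimal sketch of enabling that rewrite through TinyNeuralNetwork's TFLite converter. The model, input, and output path are placeholders, and the exact set of constructor arguments should be checked against the converter documentation:

```python
import torch
from tinynn.converter import TFLiteConverter

# Placeholder model with a grouped deconvolution; swap in your own module.
model = torch.nn.ConvTranspose2d(8, 8, kernel_size=3, groups=4)
model.eval()

dummy_input = torch.randn(1, 8, 32, 32)

# `group_conv_rewrite=True` asks the converter to split each grouped
# (de)convolution into per-group ops plus a concat, so the generated model
# avoids group (de)conv ops that delegates like XNNPACK may not support.
converter = TFLiteConverter(
    model,
    dummy_input,
    tflite_path='model.tflite',
    group_conv_rewrite=True,
)
converter.convert()
```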
> I'm not actually looking to target XNNPACK ultimately (although I would like it as a reference), but I've created an issue: [tensorflow/tensorflow#62181](https://github.com/tensorflow/tensorflow/issues/62181) Thanks for the bias fix, I'll look to...
@BmanClark I have commented on that issue. Glad you solved it the other way.
These tests are skipped in https://github.com/alibaba/TinyNeuralNetwork/pull/304.
> Also, the QR code for the DingTalk group seems to have expired. Could you please update it?

The QR code has been updated.
> The model's accuracy drops a lot after int8 PTQ. Is int16 activation supported now? Thanks.

Int16 quantization is supported (see https://github.com/alibaba/TinyNeuralNetwork/issues/47 for details), but backend implementations of it are generally still immature, so inference may be much slower than with int8. In general, we recommend analyzing further which layers are responsible for the quantization loss and then skipping quantization for those layers during conversion.
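As a rough illustration of that layer-by-layer analysis (this is a generic PyTorch sketch, not TinyNeuralNetwork's own tooling; the function name and the assumption that module names line up between the float and fake-quantized models are mine), one can hook matching layers and rank them by output error:

```python
import torch

def rank_layers_by_quant_error(float_model, quant_model, sample_input,
                               layer_types=(torch.nn.Conv2d, torch.nn.Linear)):
    """Compare per-layer outputs of two models with matching module names and
    print them sorted by relative error (most lossy layers first)."""

    def collect(model):
        outputs, handles = {}, []
        for name, module in model.named_modules():
            if isinstance(module, layer_types):
                handles.append(module.register_forward_hook(
                    lambda m, inp, out, name=name: outputs.setdefault(
                        name, out.detach().float())))
        with torch.no_grad():
            model(sample_input)
        for h in handles:
            h.remove()
        return outputs

    ref, quant = collect(float_model), collect(quant_model)
    errors = {
        name: ((ref[name] - quant[name]).norm() / (ref[name].norm() + 1e-12)).item()
        for name in ref
        if name in quant and ref[name].shape == quant[name].shape
    }
    for name, err in sorted(errors.items(), key=lambda kv: -kv[1]):
        print(f'{name}: relative error {err:.4f}')
```

Layers at the top of the list are the first candidates to keep in float when configuring the quantizer.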
https://github.com/alibaba/TinyNeuralNetwork/blob/main/docs/FAQ.md#what-if-duplicate-tensors-is-generated-in-the-tflite-model-eg-when-performing-static-quantization-for-lstms
Q: What if duplicate tensors are generated in the TFLite model (e.g. when performing static quantization for LSTMs)?
A: You may try out `group_tensors=True` to remove those duplicates.
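For context, a minimal sketch of how that option is typically passed, assuming the usual `TFLiteConverter` entry point; the stand-in LSTM module, the dummy input, and the output path are placeholders for your own objects:

```python
import torch
from tinynn.converter import TFLiteConverter

# Stand-in module; in practice this would be the statically quantized LSTM
# model whose TFLite export contains duplicated tensors.
model = torch.nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
model.eval()

dummy_input = torch.randn(1, 10, 16)

# `group_tensors=True` merges tensors with identical contents in the generated
# TFLite model, removing the duplicates mentioned in the FAQ entry above.
converter = TFLiteConverter(
    model,
    dummy_input,
    tflite_path='model.tflite',
    group_tensors=True,
)
converter.convert()
```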