XNNPACK
Do you plan to support int8 per-channel quantization for the linear op?
Yes, but it's not a high priority right now. You're welcome to contribute it!
By "linear", do you mean QD8 GEMM with a linear activation instead of minmax?
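For readers unfamiliar with the request: in per-channel quantization, each output channel of the weight matrix gets its own scale, which preserves accuracy better than a single per-tensor scale when channel magnitudes differ widely. A minimal sketch of symmetric int8 per-channel quantization (illustrative only, not XNNPACK code; all function names here are hypothetical):

```python
# Illustrative sketch: symmetric int8 per-channel quantization for a
# linear (fully-connected) weight matrix. One scale per output channel
# (per row). Not XNNPACK's implementation; names are made up.

def quantize_per_channel(weights):
    """weights: list of rows, one row per output channel.
    Returns (quantized int8 rows, per-row scales)."""
    q_rows, scales = [], []
    for row in weights:
        max_abs = max(abs(w) for w in row)
        # Map the largest magnitude in this channel to 127.
        scale = max_abs / 127.0 if max_abs > 0 else 1.0
        q_rows.append([max(-127, min(127, round(w / scale))) for w in row])
        scales.append(scale)
    return q_rows, scales

def dequantize_per_channel(q_rows, scales):
    """Recover approximate float weights: q * scale, per channel."""
    return [[q * s for q in row] for row, s in zip(q_rows, scales)]

# Two channels with very different magnitudes: per-channel scales keep
# the small-magnitude channel from collapsing to zero.
w = [[0.5, -1.0, 0.25], [0.01, 0.02, -0.03]]
q, s = quantize_per_channel(w)
w_rec = dequantize_per_channel(q, s)
```

With a single per-tensor scale, the second row above would quantize to values near zero; per-channel scales keep its relative precision.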