hahaha

Results: 5 issues of hahaha

If the model is causal, it should not use "gLN", but there is no such constraint in your code. Also, cLN should compute the cumulative mean and variance over time steps.
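For reference, cumulative layer normalization (cLN) normalizes each frame using statistics accumulated over all channels and all frames up to the current one, so no future frames are used. A minimal NumPy sketch (the function name, `(channels, time)` layout, and scalar `gamma`/`beta` are illustrative assumptions, not this repository's API):

```python
import numpy as np

def cumulative_layer_norm(x, gamma=1.0, beta=0.0, eps=1e-8):
    """Cumulative layer norm (cLN) sketch.

    x: array of shape (channels, time). At each step t, the mean and
    variance are taken over all channels and frames 0..t, so the
    operation stays causal.
    """
    C, T = x.shape
    step_sum = x.sum(axis=0)             # per-frame sum over channels, shape (T,)
    step_sq_sum = (x ** 2).sum(axis=0)   # per-frame sum of squares
    cum_sum = np.cumsum(step_sum)        # running sums over time
    cum_sq_sum = np.cumsum(step_sq_sum)
    count = C * np.arange(1, T + 1)      # elements seen up to each frame
    cum_mean = cum_sum / count
    cum_var = cum_sq_sum / count - cum_mean ** 2
    return gamma * (x - cum_mean) / np.sqrt(cum_var + eps) + beta
```

Because only past frames enter the statistics, perturbing frame `t` never changes the output at frames before `t`, which is the property a causal model needs and which gLN (statistics over the whole utterance) violates.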

I converted the TFLite model to int8 following the official TensorFlow documentation, but this error is raised when I convert it to ONNX.

bug

Using the `quantize_model` interface:

![image](https://user-images.githubusercontent.com/19814680/232799979-c3c83382-78bb-4dcf-bf9b-9abb6216cc4e.png)

Using the original convert interface:

![image](https://user-images.githubusercontent.com/19814680/232800108-0a06b90c-1d80-47e9-a29d-0bbc2f95b002.png)

bug

```
2023-12-18 17:40:54.489013950 [E:onnxruntime:, sequential_executor.cc:494 ExecuteKernel] Non-zero status code returned while running QLinearMatMul node. Name:'/model/out_layer/out_layer/OutLinear/MatMul' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/quantization/quantize_linear_matmul.cc:55 virtual onnxruntime::common::Status onnxruntime::QLinearMatMul::Compute(onnxruntime::OpKernelContext*) const IsBQuantParamSupported(b_offset->Shape(), b ? b->Shape() : b_shape_) was false. QLinearMatmul...
```
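The failing check (`IsBQuantParamSupported`) rejects the shape of B's zero-point: per the ONNX `QLinearMatMul` spec, B's scale and zero-point must be a scalar (per-tensor) or a tensor compatible with B's last dimension (per-column), and the runtime kernel is stricter than some converters' output. For intuition, here is a NumPy sketch of what `QLinearMatMul` computes in the simple per-tensor case (all names and the dequantize-multiply-requantize structure are illustrative, not onnxruntime's implementation):

```python
import numpy as np

def qlinear_matmul(a_q, a_scale, a_zp, b_q, b_scale, b_zp, y_scale, y_zp):
    """Reference int8 quantized matmul, per-tensor parameters only.

    Dequantize both operands, multiply in float, then requantize the
    result. Scales and zero points are scalars here; a B zero-point
    whose shape the kernel does not accept is what triggers the
    onnxruntime error shown above.
    """
    a = (a_q.astype(np.int32) - a_zp) * a_scale   # dequantize A
    b = (b_q.astype(np.int32) - b_zp) * b_scale   # dequantize B
    y = a @ b                                     # float matmul
    y_q = np.round(y / y_scale) + y_zp            # requantize output
    return np.clip(y_q, -128, 127).astype(np.int8)
```

If the converter emitted per-channel parameters for B in a layout the CPU kernel does not support, re-quantizing B per-tensor (or per-column along B's last axis) is a plausible workaround to investigate.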