Stephenzza

Results: 3 comments of Stephenzza

![image](https://user-images.githubusercontent.com/40635702/196871398-c70fa223-1e0d-4702-87e3-963cd73603e9.png) I found that after quantization, the max value of the matmul operator is too large; the 0–255 range cannot represent all the values.
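To illustrate why a large max value is a problem for 8-bit quantization, here is a minimal sketch of affine uint8 quantization (the function and values are my own for illustration, not from the project): a single large outlier stretches the scale so much that all the small values collapse onto the same code.

```python
import numpy as np

def quantize_uint8(x):
    # Affine quantization: map [min, max] onto the 0..255 integer grid.
    scale = (x.max() - x.min()) / 255.0          # size of one quantization step
    zero_point = np.round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, 255).astype(np.uint8)
    return q, scale, zero_point

# 1000 "normal" activations plus one exploded outlier.
x = np.concatenate([np.random.uniform(-1, 1, 1000), [5000.0]])
q, scale, zp = quantize_uint8(x)

# With max ~ 5000, one step is ~ 19.6, so everything in [-1, 1]
# quantizes to a single code -- the small values are wiped out.
print(scale, np.unique(q[:1000]).size)
```

This is the usual argument for clipping/calibrating the range (or keeping such layers in higher precision) instead of quantizing to the raw min/max.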

This operation is torch.bmm() at decoder.py line 79: `inst_features = torch.bmm(iam_prob, features.view(B, C, -1).permute(0, 2, 1))`. Assuming an input tensor of shape (B, 3, 512, 512), inst_features = (B, 400, 4096) × (B, 4096, 256); after this matrix multiplication I got the exploded...
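The magnitude growth follows directly from the shapes quoted above: each output element accumulates 4096 products. A sketch with NumPy's batched matmul standing in for torch.bmm (shapes are from the comment; `B` and the random inputs are illustrative):

```python
import numpy as np

# Shapes from the quoted line: (B, 400, 4096) @ (B, 4096, 256).
B, N, K, C = 2, 400, 4096, 256
iam_prob = np.random.rand(B, N, K).astype(np.float32)   # stand-in for iam_prob
features = np.random.rand(B, K, C).astype(np.float32)   # stand-in for permuted features

inst_features = iam_prob @ features                      # (B, 400, 256)

# Each element sums K = 4096 products; even with inputs in [0, 1),
# the expected value per element is about K/4 ~ 1000 -- far beyond
# what a 0..255 quantization grid can represent without huge steps.
print(inst_features.shape, float(inst_features.mean()))
```

So the explosion is not a bug in bmm itself but a consequence of the long reduction dimension, which is why this layer's output range overwhelms uint8 quantization.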

Same problem here, and I can also confirm that the key is correct.