ZeroQ
Model remains float32 type after quantization
Hello, thank you for providing the open-source code. I ran into some issues while trying to reproduce your results. I set up the environment according to your instructions and ran the code. However, I found that the model's weights are still float32 after quantization (I set weight_bit = 8 and activation_bit = 8). I am not sure where the problem is and would appreciate your help. Thank you again for releasing the code, and I look forward to your reply.
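
For context, my understanding is that simulated (fake) quantization snaps values to an 8-bit grid while keeping the storage dtype float32, which may be what I am seeing. A minimal NumPy sketch of that idea (my own illustration, not the repo's code; `fake_quantize` is a hypothetical helper):

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    # Symmetric linear quantization: round to an integer grid, then
    # rescale back. The values are 8-bit-representable, but the array's
    # storage dtype never changes.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    return np.round(x / scale).clip(-qmax - 1, qmax) * scale

w = np.random.randn(4, 4).astype(np.float32)
wq = fake_quantize(w, num_bits=8)
print(wq.dtype)  # float32: 8-bit values, but stored in a float tensor
```

If ZeroQ's quantization is simulated in this way, a float32 dtype after quantization would be expected. Could you confirm whether that is the case, or whether the exported model should actually carry int8 weights?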