Igor Margulis
@anujnayyar1, you can fix this issue by updating the `from_float` methods of the corresponding layers implemented in the torch-int package. For example, for `W8A8B8O8Linear`:

```
int8_module.weight = torch.reshape(int8_weight, int8_module.weight.shape)
int8_module.bias = torch.reshape(int8_bias, ...
```
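
A minimal sketch of the fix as a standalone helper, assuming the quantized tensors are simply reshaped to the buffer shapes registered on the int8 module before assignment; the helper name and signature are hypothetical, not part of torch-int:

```
import torch

def _assign_reshaped(int8_module, int8_weight, int8_bias):
    # Hypothetical helper: reshape the quantized weight/bias to the shapes
    # of the buffers already registered on the int8 module, then assign
    # them, so the shape mismatch no longer occurs.
    int8_module.weight = torch.reshape(int8_weight, int8_module.weight.shape)
    int8_module.bias = torch.reshape(int8_bias, int8_module.bias.shape)
    return int8_module
```

Inside `W8A8B8O8Linear.from_float` (and the other affected layers), this reshape-then-assign step would presumably replace the plain weight/bias assignments that trigger the mismatch.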