pytorch-binary-converter
In float2bit, values of 0 trigger an out-of-bounds error
Hi,
when sending a tensor from a ReLU activation function to float2bit, f can contain values of 0. log2 then returns -inf, with all the complications that ensue (an out-of-bounds error in the gather call).
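For illustration, here is a minimal sketch of the failure mode, assuming e_scientific is derived as floor(log2(f)) as the error message suggests:

```python
import torch

f = torch.tensor([0.0, 0.5, 2.0])          # e.g. a ReLU output with an exact zero
e_scientific = torch.floor(torch.log2(f))
print(e_scientific)                          # tensor([-inf, -1., 1.])

# Casting the -inf entry to an integer index (as needed for torch.gather)
# produces a garbage index, and the subsequent gather raises
# "index out of bounds".
```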
One quick fix is to add a small constant to the tensor f, small enough that it doesn't change the e_scientific value of the non-zero entries.
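A minimal sketch of that workaround; the epsilon value and the import path are assumptions on my side, not from the library:

```python
import torch
# from binary_converter import float2bit   # hypothetical import path

EPS = 1e-36  # assumed value: far below any float32 ulp of a typical activation,
             # yet large enough that log2(EPS) stays finite

f = torch.relu(torch.randn(4))  # ReLU output can contain exact zeros
f_safe = f + EPS                # zeros become EPS; non-zero float32 values round back unchanged
# bits = float2bit(f_safe)      # gather no longer receives an -inf-derived index
```

An alternative that guarantees non-zero entries are never perturbed, whatever the dtype, would be `torch.where(f == 0, torch.full_like(f, EPS), f)`.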
Thanks for this library, it's been very helpful! :)
What do you mean by "a small constant"?
Sorry, I'm in the same situation: an unexpected error is raised when a 0 is passed as the floating-point input.