
In float2bit, input values of 0 trigger an out-of-bounds error

Open sebastienwood opened this issue 5 years ago • 2 comments

Hi,

When sending a tensor produced by a ReLU activation to float2bit, f can contain the value 0. log2 then returns -inf, with all the complications that ensue (an out-of-bounds error in the gather call).
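For illustration, here is a minimal reproduction of the failure mode (the exact exponent computation inside float2bit may differ; this only shows why log2 of a ReLU output blows up):

```python
import torch

# ReLU output contains exact zeros
f = torch.relu(torch.tensor([-1.0, 0.0, 2.0]))

# float2bit derives the exponent from log2 of the magnitude;
# log2(0) is -inf, which later breaks integer indexing (e.g. in gather)
e = torch.floor(torch.log2(torch.abs(f)))
print(e)  # tensor([-inf, -inf, 1.])
```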

One quick fix is to add a small constant to the tensor f, small enough that it doesn't change the e_scientific value of any nonzero entry.
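A sketch of what that workaround could look like (the eps value and the helper name are illustrative, not from the repo):

```python
import torch

def make_log2_safe(f, eps=1e-30):
    # eps is far below the precision (ulp) of any nonzero float32 entry,
    # so f + eps leaves nonzero values, and hence their exponents
    # (e_scientific), bit-identical; only exact zeros become eps.
    return f + eps

f = torch.relu(torch.tensor([-1.0, 0.0, 2.0]))
e = torch.floor(torch.log2(torch.abs(make_log2_safe(f))))
print(e)  # tensor([-100., -100., 1.])  -- finite, so downstream indexing stays in range
```

An alternative that avoids touching nonzero values at all is to replace only the exact zeros, e.g. `torch.where(f == 0, torch.full_like(f, eps), f)`.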

Thanks for this library, it's been very helpful! :)

sebastienwood avatar Oct 29 '19 22:10 sebastienwood

What does "a small constant" mean here?

joeliuz6 avatar Jun 22 '22 12:06 joeliuz6

Sorry, I'm in the same situation: an unexpected error is raised when the floating-point input contains a 0.

wangxyustc avatar Jul 14 '22 13:07 wangxyustc