How to represent +1 and -1 in one bit?
As mentioned in the BinaryNet paper, we can use the XNOR-bitcounting operation. That seems reasonable, but when we actually try to realize this in one bit, how should we represent -1? Should we convert it to 0? That also seems to cause problems when we XNOR it with the 8-bit input vector, since a negative number will usually be in two's-complement form. I might be wrong, just a little confused, since I was wondering how to achieve this in hardware...
That seems reasonable, but when we actually try to realize this in one bit, how should we represent -1? Should we convert it to 0?
In practice, we use 0s to represent the -1s, use XOR-popcount-adds to compute the weighted sums, and finally adjust these weighted sums so that we get the same result as a dot product (provided the operands are constrained to {-1, +1}).
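For readers trying this out, here is a minimal sketch (not from the paper, just an illustration in NumPy) of the XOR-popcount trick and the final adjustment that recovers the exact dot product:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                   # vector length (illustrative)

# Random {-1, +1} weight and activation vectors.
w = rng.choice([-1, 1], size=n)
x = rng.choice([-1, 1], size=n)

# Encode -1 as bit 0 and +1 as bit 1.
w_bits = (w == 1).astype(np.uint8)
x_bits = (x == 1).astype(np.uint8)

# XOR marks positions where the signs differ; popcount counts them.
differing = int(np.count_nonzero(w_bits ^ x_bits))

# Adjustment: matches = n - differing, so the dot product is
# (+1)*matches + (-1)*differing = n - 2*differing.
xor_popcount_dot = n - 2 * differing

assert xor_popcount_dot == int(np.dot(w, x))
```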
And that seems to cause some problems when we try to XNOR it with the 8-bit input vector, since a negative number will usually be in two's-complement form.
We explain in the paper (Section 1.6) how to handle 8-bit inputs as shifted sums of binary inputs.
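As an illustration of that decomposition (my own sketch, not the paper's code): an unsigned 8-bit input can be written as x = sum_{k=0}^{7} 2^k * b_k, where b_k is the k-th bit plane, so the weighted sum reduces to eight binary dot products that are shifted and accumulated:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# Unsigned 8-bit inputs and {-1, +1} binary weights.
x = rng.integers(0, 256, size=n, dtype=np.uint8)
w = rng.choice([-1, 1], size=n)
w_bits = (w == 1).astype(np.uint8)      # +1 -> 1, -1 -> 0

acc = 0
for k in range(8):
    plane = (x >> k) & 1                # k-th bit of every input

    # Dot product of a {0,1} bit plane with {-1,+1} weights using only
    # bitwise ops and popcounts:
    #   ones      = popcount(plane)            (positions where the bit is 1)
    #   agree     = popcount(plane & w_bits)   (bit is 1 and weight is +1)
    #   plane_dot = (+1)*agree + (-1)*(ones - agree) = 2*agree - ones
    ones = int(np.count_nonzero(plane))
    agree = int(np.count_nonzero(plane & w_bits))
    plane_dot = 2 * agree - ones

    acc += (1 << k) * plane_dot         # shift and accumulate

assert acc == int(np.dot(x.astype(np.int64), w))
```

If the inputs were signed two's-complement rather than unsigned, the same decomposition would still work, except that the most significant bit plane would be weighted by -2^7 instead of +2^7.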
Thanks for the reply! :) Is it because the input pixel value is an 8-bit unsigned integer that we can use the XOR-popcount-adds method? It does give the right answer with an unsigned input vector, but what about a vector of signed numbers (or do we not need to consider that situation)?
And what if one of the weight vectors is all 1s? Would it overflow? In that case, should we use an (8 + log2(vector length))-bit number to represent the output?
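For what it's worth, a rough worst-case bound (my own back-of-the-envelope, not from the paper): with 8-bit unsigned inputs and a vector of length 1024, the largest possible sum is 255 * 1024 < 2^18, so an (8 + ceil(log2 N))-bit accumulator suffices, plus one extra sign bit if the weights can also be -1. The hypothetical helper below just computes that bound:

```python
import math

def accumulator_bits(input_bits: int, vector_length: int, signed: bool = False) -> int:
    """Worst-case accumulator width for summing `vector_length` products of an
    `input_bits`-bit unsigned input with a {-1, +1} weight (hypothetical helper)."""
    # Largest possible magnitude: (2**input_bits - 1) * vector_length.
    magnitude_bits = input_bits + math.ceil(math.log2(vector_length))
    return magnitude_bits + (1 if signed else 0)   # extra bit if the sum can go negative

print(accumulator_bits(8, 1024))               # 18 bits when all weights are +1
print(accumulator_bits(8, 1024, signed=True))  # 19 bits if weights can also be -1
```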