add binary fully connected operator
The binary fully connected operator is in essence a binary matrix-matrix multiplication (BGemm). Assume the input is M × N and the weight is N × K, where M is the batch size, N is the number of neurons in the previous layer, and K is the number of neurons in the current layer.
For a batch size of M = 1, this boils down to implementing a fast binary matrix-vector multiplication.
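As a rough illustration of what BGemm computes, here is a minimal, unoptimised NumPy sketch (the names `pack_bits` and `bgemm_reference` are hypothetical and not part of LCE): with inputs and weights constrained to {-1, +1}, each length-N dot product can be evaluated as N - 2 * popcount(XOR) on bit-packed operands.

```python
import numpy as np

def pack_bits(x):
    """Map {-1, +1} values to bits (+1 -> 0, -1 -> 1) and pack along the last axis."""
    return np.packbits((x < 0).astype(np.uint8), axis=-1)

def bgemm_reference(a, b):
    """Reference binary GEMM: a is (M, N), b is (N, K), entries in {-1, +1}."""
    m, n = a.shape
    _, k = b.shape
    a_packed = pack_bits(a)      # (M, ceil(N/8)) uint8
    b_packed = pack_bits(b.T)    # (K, ceil(N/8)) uint8
    out = np.empty((m, k), dtype=np.int32)
    for i in range(m):
        for j in range(k):
            # popcount of the XOR counts the positions where the signs differ,
            # so the dot product equals n - 2 * popcount.
            diff = np.bitwise_xor(a_packed[i], b_packed[j])
            popcnt = int(np.unpackbits(diff).sum())
            out[i, j] = n - 2 * popcnt
    return out

# Sanity check against a float matmul.
rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=(4, 64)).astype(np.float32)
b = rng.choice([-1, 1], size=(64, 10)).astype(np.float32)
assert np.array_equal(bgemm_reference(a, b), (a @ b).astype(np.int32))
```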
@lgeiger @arashb @Tombana any update on the implementation for the binary dense layers?
@rameshKrSah we haven't made any specific efforts towards implementing a binary dense layer.
I think the most obvious and easiest way we could support binary dense layers would actually not involve adding a new op at all, but instead mapping Larq binary dense layers to an equivalent LCE 1x1 binary convolution. This would be an automated way of doing the penultimate bullet point from this page of our docs.
This wouldn't be particularly fast (though it would hopefully be faster than float dense layers), because our optimised binary convolution kernels operate on 'chunks' of four input pixels at a time, whereas this 'equivalent' convolution here would have only one input pixel. This is, however, something that we could very easily solve once we switch over to using the (currently experimental) indirect bgemm kernels, by adding a micro-kernel that operates on one input pixel at a time.
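As a sketch of the dense-to-1x1-convolution mapping described above (assuming Larq's `QuantDense` and `QuantConv2D` layers with `ste_sign` quantizers; this illustrates the equivalence only and is not the actual converter pass):

```python
import numpy as np
import tensorflow as tf
import larq as lq

N, K = 128, 64

dense = lq.layers.QuantDense(
    K, input_quantizer="ste_sign", kernel_quantizer="ste_sign", use_bias=False
)
conv = lq.layers.QuantConv2D(
    K, kernel_size=1, input_quantizer="ste_sign", kernel_quantizer="ste_sign",
    use_bias=False,
)

x = tf.random.normal((4, N))

# Build both layers so their kernels are created.
_ = dense(x)
_ = conv(tf.reshape(x, (4, 1, 1, N)))

# Reuse the dense kernel of shape (N, K) as a (1, 1, N, K) convolution kernel.
conv.set_weights([dense.get_weights()[0].reshape(1, 1, N, K)])

# A dense layer on (batch, N) matches a 1x1 convolution on (batch, 1, 1, N).
dense_out = dense(x).numpy()
conv_out = tf.reshape(conv(tf.reshape(x, (4, 1, 1, N))), (4, K)).numpy()
np.testing.assert_allclose(dense_out, conv_out, rtol=1e-5, atol=1e-5)
```

A converter doing this mapping automatically would presumably insert the reshape to a 1 × 1 spatial tensor and rewrite the kernel shape in the same way, before lowering to the LCE binary convolution op.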
Hi there, @AdamHillier @Tombana @arashb @lgeiger
> This wouldn't be particularly fast (though it would hopefully be faster than float dense layers), because our optimised binary convolution kernels operate on 'chunks' of four input pixels at a time, whereas this 'equivalent' convolution here would have only one input pixel. This is, however, something that we could very easily solve once we switch over to using the (currently experimental) indirect bgemm kernels, by adding a micro-kernel that operates on one input pixel at a time.
It seems that the actual 1x1 binary convolution is 4x slower than its fully optimized version. Are there any guidelines or instructions on how to bridge this gap?