
add binary fully connected operator

Open arashb opened this issue 6 years ago • 4 comments

A binary fully connected operator is in essence a binary matrix-matrix multiplication (BGemm). Assume that the input is M × N and the weight is N × K (M is the batch size, N is the number of neurons of the previous layer, and K is the number of neurons of the current layer).
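To make the BGemm semantics concrete, here is a small numpy sketch (not part of LCE) showing that with {-1, +1} inputs and weights, each output element of the M × K result can be computed multiplication-free via the XNOR-popcount identity: a · b = 2 · matches − N.

```python
import numpy as np

# Binary GEMM semantics: A is M x N input, B is N x K weights,
# all entries in {-1, +1}.
rng = np.random.default_rng(0)
M, N, K = 3, 16, 4
A = rng.choice([-1, 1], size=(M, N)).astype(np.int32)
B = rng.choice([-1, 1], size=(N, K)).astype(np.int32)

# Reference result via ordinary matmul
ref = A @ B

# Same result via the XNOR-popcount identity, with no multiplications:
# each matching pair of signs contributes +1, each mismatch -1,
# so dot = matches - (N - matches) = 2 * matches - N.
matches = (A[:, None, :] == B.T[None, :, :]).sum(axis=2)  # M x K match counts
bgemm = 2 * matches - N

assert np.array_equal(ref, bgemm)
```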

arashb avatar Oct 07 '19 16:10 arashb

For batch size 1, this boils down to implementing a fast binary matrix-vector multiplication.
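A bit-packed sketch of that matrix-vector case (illustrative only, not the LCE kernel): rows are packed with `np.packbits`, and the popcount of the XOR between packed operands yields the number of mismatching bits, from which the {-1, +1} dot product follows directly.

```python
import numpy as np

def binary_matvec(W, x):
    # W: K x N weights in {-1, +1}, x: length-N input in {-1, +1}
    K, N = W.shape
    Wb = np.packbits((W > 0).astype(np.uint8), axis=1)  # K x ceil(N/8) bytes
    xb = np.packbits((x > 0).astype(np.uint8))
    y = np.empty(K, dtype=np.int32)
    for k in range(K):
        mismatches = np.unpackbits(Wb[k] ^ xb).sum()  # popcount of the XOR
        y[k] = N - 2 * int(mismatches)                # dot product over {-1,+1}
    return y

rng = np.random.default_rng(1)
W = rng.choice([-1, 1], size=(5, 32))
x = rng.choice([-1, 1], size=32)
assert np.array_equal(binary_matvec(W, x), W @ x)
```

A real kernel would pack into machine words and use a hardware popcount instruction, but the arithmetic is the same.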

arashb avatar Nov 11 '19 14:11 arashb

@lgeiger @arashb @Tombana any update on the implementation for the binary dense layers?

rameshKrSah avatar Jun 23 '21 14:06 rameshKrSah

@rameshKrSah we haven't made any specific efforts towards implementing a binary dense layer.

I think the most obvious and easiest way we could support binary dense layers would actually not involve adding a new op at all, but instead mapping Larq binary dense layers to an equivalent LCE 1x1 binary convolution. This would be an automated way of doing the penultimate bullet point from this page of our docs.
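The dense-to-1x1-convolution equivalence can be sketched in numpy (a sketch of the mapping described above, not the actual LCE converter pass): reshape the (batch, N) input to a (batch, 1, 1, N) "image" and the (N, K) weights to a (1, 1, N, K) kernel, and the per-pixel contraction reproduces the dense output exactly.

```python
import numpy as np

# Dense layer y = x @ W expressed as a 1x1 convolution over a
# 1x1 spatial grid (NHWC layout).
M, N, K = 4, 8, 5
rng = np.random.default_rng(2)
x = rng.standard_normal((M, N)).astype(np.float32)
W = rng.standard_normal((N, K)).astype(np.float32)

dense_out = x @ W

# A 1x1 convolution at a single pixel is just a matmul over channels
x_img = x.reshape(M, 1, 1, N)          # batch of 1x1 "images", N channels
kernel = W.reshape(1, 1, N, K)         # 1x1 kernel, N in / K out channels
conv_out = np.einsum('bhwn,xynk->bhwk', x_img, kernel).reshape(M, K)

assert np.allclose(dense_out, conv_out, atol=1e-5)
```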

This wouldn't be particularly fast (though it would hopefully be faster than float dense layers), because our optimised binary convolution kernels operate on 'chunks' of four input pixels at a time, whereas this 'equivalent' convolution here would have only one input pixel. This is, however, something that we could very easily solve once we switch over to using the (currently experimental) indirect bgemm kernels, by adding a micro-kernel that operates on one input pixel at a time.

AdamHillier avatar Jun 23 '21 15:06 AdamHillier

Hi there @AdamHillier @Tombana @arashb @lgeiger,

> This wouldn't be particularly fast (though it would hopefully be faster than float dense layers), because our optimised binary convolution kernels operate on 'chunks' of four input pixels at a time, whereas this 'equivalent' convolution here would have only one input pixel. This is, however, something that we could very easily solve once we switch over to using the (currently experimental) indirect bgemm kernels, by adding a micro-kernel that operates on one input pixel at a time.

It seems that this 1x1 binary convolution is about 4x slower than its fully optimized version. Are there any guidelines or instructions on how to bridge this gap?

bywmm avatar Oct 25 '23 16:10 bywmm