Octavian.jl

Generalize matrix multiply to support `C = f.(A * B + x)`

Open · chriselrod opened this issue 4 years ago · 5 comments

This is for the sake of implementing dense layers for neural networks.

The reverse pass also needs `g.(Cbar) * B'` and `A' * g.(Cbar)`, but the fact that we have two instances of `g.(Cbar)` here means we should probably evaluate `g` just once per element of `Cbar`.

However, perhaps somewhat related to https://github.com/JuliaLinearAlgebra/Octavian.jl/issues/40, we should add support for batching matrix operations of different sizes as well -- in particular, the reverse pass of

```julia
gCbar = g.(Cbar)
Abar = gCbar * B'
Bbar = A' * gCbar
```

should perhaps be evaluated with one function call that can minimize the (already low) threading and synchronization overhead.

Ideally, we'd have an API for doing this a little more generically, but when allocating threads it would help to know that a lot of the array dimensions here are the same.
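
For concreteness, here is a minimal unfused reference for that reverse pass, using plain `LinearAlgebra.mul!` (the name `dense_reverse_reference!` is hypothetical, not an existing Octavian API). A fused, batched version would compute the same quantities while evaluating `g` once per element and sharing a single thread launch across both products:

```julia
using LinearAlgebra: mul!

# Unfused reference semantics for the batched reverse pass described above.
# `dense_reverse_reference!` is a hypothetical name, not an existing Octavian API.
function dense_reverse_reference!(Abar, Bbar, A, B, Cbar, g)
    gCbar = g.(Cbar)        # evaluate g once per element of Cbar
    mul!(Abar, gCbar, B')   # Abar = g.(Cbar) * B'
    mul!(Bbar, A', gCbar)   # Bbar = A' * g.(Cbar)
    return Abar, Bbar
end
```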

chriselrod · Jan 21 '21 22:01

I'm assuming that here `g` is the adjoint of `f`?

DilumAluthge · Jan 21 '21 22:01

Yes, in the example if `f === tanh` then `g(x) = x * muladd(-x, x, 1)`.

Although of course `f` and `g` could be arbitrary (so long as they're defined on numbers); it's just that the primary use case and motivation would involve `g = f'`.

I think this is a compelling use case for Octavian. While performance is close to MKL at smallish sizes, I think we'd have a nice advantage over the current dense layer implementation, which looks like this on the forward pass (`x` is the bias vector, normally called `b`):

```julia
C = A * B
@. C = f(C .+ x)
```

Note that the broadcast is single threaded. I imagine we could get a nice performance advantage, at the sizes where people would actually consider training on the CPU, over MKL, and definitely over the default OpenBLAS, through this fusion and by threading the entire operation.
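
For reference, here is a minimal sketch of that two-pass baseline written against Octavian's existing `matmul!` (the wrapper name `dense_forward_reference!` is just illustrative); the proposal is to replace both passes over `C` with a single threaded kernel:

```julia
using Octavian

# Two passes over C: a threaded matmul, then a single-threaded broadcast epilogue.
# This is the baseline the proposed fused kernel would replace.
function dense_forward_reference!(C, A, B, x, f)
    Octavian.matmul!(C, A, B)   # C = A * B (multithreaded)
    C .= f.(C .+ x)             # bias + activation, currently single threaded
    return C
end
```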

chriselrod · Jan 21 '21 23:01

> I think this is a compelling use case for Octavian. While performance is close to MKL at smallish sizes, I think we'd have a nice advantage over the current dense layer implementation, which looks like this on the forward pass (`x` is the bias vector, normally called `b`):
>
> ```julia
> C = A * B
> @. C = f(C .+ x)
> ```
>
> Note that the broadcast is single threaded. I imagine we could get a nice performance advantage, at the sizes where people would actually consider training on the CPU, over MKL, and definitely over the default OpenBLAS, through this fusion and by threading the entire operation.

This is really exciting!

DilumAluthge · Jan 23 '21 20:01

> Note that the broadcast is single threaded.

Two questions:

  1. Can we vectorize the broadcast by using `@avx`?
  2. Would it make sense to try to multi-thread the broadcast?

DilumAluthge · Jan 23 '21 20:01

Yes and yes.

The goal would be to launch threads just once per pass (i.e., once for the forward pass and once for the backward pass). This would reduce threading overhead, and also let us make sure locality is good and minimize how much data has to be moved between cores.
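
As a concrete illustration of the "yes and yes", here is a minimal sketch, assuming LoopVectorization's `@tturbo` (the threaded successor of `@avx`) and `tanh` as the example activation. Note that this still launches threads twice, once inside `matmul` and once for the epilogue loop, whereas the fused kernel discussed in this issue would do the whole forward pass in a single launch:

```julia
using Octavian, LoopVectorization

A, B = rand(128, 64), rand(64, 32)
x = rand(128)                  # bias vector
C = Octavian.matmul(A, B)      # threaded matrix multiply

# Threaded + SIMD bias-add and activation, as a separate pass over C.
@tturbo for j in axes(C, 2), i in axes(C, 1)
    C[i, j] = tanh(C[i, j] + x[i])
end
```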

chriselrod · Jan 24 '21 21:01