keras-lmu
Feature - parallel training
Are there any plans to implement training in a parallel manner, as shown in https://arxiv.org/pdf/2102.11417.pdf?
This has been implemented as keras_lmu.LMUFFT, which will be selected automatically when you use keras_lmu.LMU and these conditions are satisfied:
https://github.com/nengo/keras-lmu/blob/ab0775791aa73f9d22780539594ef4bd7de0be25/keras_lmu/layers.py#L398-L403
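For intuition on why the FFT path works, here is a minimal NumPy sketch of the core idea from the paper: a linear time-invariant recurrence (like the LMU memory update) can be unrolled into a convolution of the input with the system's impulse response, which can then be evaluated in parallel via the FFT. The A and B matrices below are illustrative stand-ins, not the actual LMU state-space matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
order, n = 4, 32            # memory order, sequence length
# Illustrative (roughly stable) state-space matrices, not the LMU's own.
A = 0.5 * rng.standard_normal((order, order)) / np.sqrt(order)
B = rng.standard_normal((order, 1))
u = rng.standard_normal(n)

# 1) Sequential (RNN-style) computation: n dependent steps.
m_seq = np.zeros((n, order))
m = np.zeros((order, 1))
for t in range(n):
    m = A @ m + B * u[t]
    m_seq[t] = m[:, 0]

# 2) Parallel computation: precompute the impulse response H[t] = A^t @ B
#    (input-independent, so it can be cached), then convolve with the
#    input using the FFT.
H = np.zeros((n, order))
AtB = B.copy()
for t in range(n):
    H[t] = AtB[:, 0]
    AtB = A @ AtB

# Zero-pad to 2n so circular convolution matches linear convolution.
m_fft = np.fft.irfft(
    np.fft.rfft(H, n=2 * n, axis=0) * np.fft.rfft(u, n=2 * n)[:, None],
    n=2 * n, axis=0,
)[:n]

assert np.allclose(m_seq, m_fft)
```

Both computations produce the same memory trajectory; the FFT version replaces the sequential dependency with operations that parallelize across the time axis.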
There is still a bit of support that can be added for the RNN flags in #35 but let us know if this works for your use case.
For now, you might want to look at the implementation here. This is essentially the same as keras_lmu.LMUFFT, with two exceptions: 1) it supports multi-dimensional input; and 2) when return_sequences=False, it implements equation (25) from the paper, which is more efficient.
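A sketch of why the return_sequences=False case can be cheaper, under our reading of the paper: when only the final memory state is needed, the whole unrolled recurrence collapses to a single weighted sum over the inputs, so no FFT over the full sequence is required. As before, A and B are illustrative stand-ins for the LMU state-space matrices.

```python
import numpy as np

rng = np.random.default_rng(1)
order, n = 4, 32
A = 0.5 * rng.standard_normal((order, order)) / np.sqrt(order)
B = rng.standard_normal((order, 1))
u = rng.standard_normal(n)

# Reference: run the recurrence sequentially, keep only the last state.
m = np.zeros((order, 1))
for t in range(n):
    m = A @ m + B * u[t]
m_last_seq = m[:, 0]

# Direct computation: m[n-1] = sum_t A^(n-1-t) @ B * u[t], i.e. one
# matmul against precomputed (input-independent) weights W[t] = A^(n-1-t) @ B.
W = np.zeros((n, order))
AtB = B.copy()
for t in range(n - 1, -1, -1):   # t = n-1 gets A^0 @ B
    W[t] = AtB[:, 0]
    AtB = A @ AtB
m_last_direct = u @ W            # shape (order,)

assert np.allclose(m_last_seq, m_last_direct)
```

The weight matrix W depends only on A, B, and the sequence length, so it can be computed once and reused across the whole batch, turning the final-state computation into a single matrix multiplication.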
Just a note that multi-dimensional input for LMUFFT is now supported in master.