[Feature Request] Add cuDNN-accelerated LSTM and GRU to PyTorch
Hi,
are there any plans to add cuDNN-accelerated versions of LSTM and GRU to the PyTorch backend? Without cuDNN acceleration, LSTM and GRU are considerably (several times) slower, even when running on a GPU; however, we still use RNNs heavily (for example, adding them after a Transformer encoder still helps in some cases).
The torch.nn.LSTM/torch.nn.GRU offer cuDNN acceleration, and wrapping them in a keras.layers.Layer works, but the resulting model is not backend-agnostic (so it cannot be used across frameworks).
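For concreteness, this is roughly the wrapper I have in mind -- a minimal sketch assuming the torch backend and using `keras.layers.TorchModuleWrapper` so that the torch parameters are tracked by Keras (the class name, arguments and shapes are just illustrative):

```python
import torch
import keras  # assumes KERAS_BACKEND="torch"


class TorchBackedLSTM(keras.layers.Layer):
    """Illustrative wrapper around torch.nn.LSTM; works only with the torch backend."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # TorchModuleWrapper exposes the torch parameters to Keras as trainable weights.
        self.lstm = keras.layers.TorchModuleWrapper(
            torch.nn.LSTM(input_size=input_shape[-1], hidden_size=self.units,
                          batch_first=True)
        )

    def call(self, inputs):
        # torch.nn.LSTM returns (output, (h_n, c_n)); keep the full output sequence.
        outputs, _ = self.lstm(inputs)
        return outputs
```

On a GPU this does dispatch to the cuDNN kernels, but a model containing such a layer can only be used under the torch backend, which is exactly the portability problem above.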
Thanks for the consideration :pray: and cheers!
PS: Relatedly, torch.nn.LSTM/GRU offer bidirectional computation in a single call (by passing bidirectional=True) -- I am not sure how much faster that is compared to running two unidirectional computations asynchronously, but if it is faster, keras.layers.Bidirectional would probably have to be updated to handle keras.layers.LSTM and keras.layers.GRU specifically in order to support it.
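For illustration, a minimal sketch of the single-call bidirectional API (the shapes are just examples, not benchmarks):

```python
import torch

# One call computes both directions; the output feature dimension doubles.
rnn = torch.nn.LSTM(input_size=128, hidden_size=256, batch_first=True,
                    bidirectional=True).cuda()
x = torch.randn(32, 50, 128, device="cuda")
output, (h_n, c_n) = rnn(x)
# output: (32, 50, 512) -- forward and backward features concatenated
# h_n, c_n: (2, 32, 256) -- one final state per direction
```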
@foxik Thanks for the issue! Would you like to contribute this by modifying the following file? https://github.com/keras-team/keras/blob/master/keras/backend/torch/rnn.py#L377C1-L382C30
@haifeng-jin I am not sure I can do it correctly. I assume that
- The `cudnn_ok` will probably need to also consider the current device (whether it is CUDA or not), in addition to verifying that the arguments are supported by the cuDNN implementation.
  - That may require changes in the code, because `cudnn_ok` is currently called only for the TensorFlow backend; on the other hand, it is used only to set `supports_jit` to False, which is probably not needed for PyTorch, because the sources indicate that TorchScript can compile `torch.nn.LSTM`/`GRU`.
- The `torch.nn.LSTM`/`GRU` is a whole layer including its parameters, but we need to use the given parameters. Therefore, we should probably call `torch._VF.lstm`/`gru`, but I am not sure whether that would be considered OK (a rough sketch of the parameter-reuse problem follows after this list).
  - Nontrivial care must be taken to ensure that the results of the cuDNN branch are the same as those of the usual branch.
- The `go_backwards` has no direct analogue in the Torch API, so some manual reversing will be needed. On the other hand, a bidirectional run is supported by the Torch API, so:
  - similarly to `backend.lstm`/`gru`, a `backend.lstm`/`gru_bidirectional` should be introduced,
  - the `Bidirectional` wrapper should try calling this `lstm`/`gru_bidirectional` to use the cuDNN-accelerated bidirectional call (only PyTorch would implement this method),
  - with this support in place, `go_backwards` would not be used in PyTorch for most usages, so it would not matter much if its implementation were not great.
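To make the parameter-reuse point above concrete, here is a rough sketch (not a proposal for the actual implementation) that copies Keras-shaped weights into a temporary torch.nn.LSTM; the function name is hypothetical, and the gate ordering and bias handling would have to be verified against the existing Keras LSTM code:

```python
import torch


def lstm_with_given_weights(inputs, kernel, recurrent_kernel, bias, units):
    """Hypothetical sketch: run torch.nn.LSTM with externally supplied weights.

    Assumes Keras-style shapes: kernel (input_dim, 4 * units),
    recurrent_kernel (units, 4 * units), bias (4 * units,).
    """
    lstm = torch.nn.LSTM(input_size=kernel.shape[0], hidden_size=units,
                         batch_first=True).to(inputs.device)
    with torch.no_grad():
        lstm.weight_ih_l0.copy_(kernel.T)            # torch stores (4 * units, input_dim)
        lstm.weight_hh_l0.copy_(recurrent_kernel.T)  # torch stores (4 * units, units)
        lstm.bias_ih_l0.copy_(bias)                  # Keras keeps a single bias vector,
        lstm.bias_hh_l0.zero_()                      # so the second torch bias is zeroed
    outputs, (h_n, c_n) = lstm(inputs)
    return outputs, h_n, c_n
```

Copying the weights like this also breaks the gradient path back to the Keras variables, which is why the real implementation would presumably have to pass the weights to the functional `torch._VF.lstm`/`gru` call instead, and why the results of the two branches need careful comparison.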
In any case, for the time being I unfortunately do not have time to work on this.
This feature would be great indeed. Hopefully someone highly capable will attend to this sometime soon.