tiny-cuda-nn
Feature request: SiLU nonlinearity
Thanks to the developers for making this exciting effort available!
I'd like to request the inclusion of the SiLU nonlinearity, which should be a very simple modification of the existing Logistic nonlinearity: SiLU is just x * logistic(x), see https://pytorch.org/docs/stable/generated/torch.nn.SiLU.html?highlight=silu#torch.nn.SiLU.
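For reference, here is a minimal plain-C++ sketch of the forward function and its input-side derivative (this is not the library's actual warp-level code; `logistic`, `silu`, and `silu_backward_in` are hypothetical names for illustration):

```cpp
#include <cmath>

// logistic(x) = 1 / (1 + exp(-x)), the existing nonlinearity
float logistic(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// SiLU(x) = x * logistic(x)
float silu(float x) { return x * logistic(x); }

// Derivative with respect to the *input* x:
//   d/dx [x * s(x)] = s(x) + x * s(x) * (1 - s(x)),  where s = logistic
float silu_backward_in(float x, float dL_dy) {
    float s = logistic(x);
    return dL_dy * (s + x * s * (1.0f - s));
}
```

In the actual implementation these scalar operations would be applied element-wise inside the warp-level activation kernels.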
Thanks!
warp_activation and warp_activation_backward_in are straightforward to implement, but I'm not sure how to make warp_activation_backward (in include/tiny-cuda-nn/common_device.h) work, since it doesn't seem to have access to the x value.
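The difficulty is real: unlike the logistic function, whose derivative can be written from its output alone as y * (1 - y), inverting y = x * logistic(x) to recover x has no closed form. One possible workaround, sketched below under the assumption that the forward pass could additionally keep the intermediate logistic value s (function names here are hypothetical, not the library's API), is to rewrite the derivative in terms of the output y and s:

```cpp
#include <cmath>

float logistic(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// With y = x * s and s = logistic(x), the input-side derivative
//   dSiLU/dx = s + x*s*(1-s)
// can be rewritten without x:
//   dSiLU/dx = y + s*(1-y)
// so a backward pass could work from (y, s) if s were stored or
// recomputed, instead of requiring x itself.
float silu_backward_from_output(float y, float s, float dL_dy) {
    return dL_dy * (y + s * (1.0f - y));
}
```

This still needs more state than the logistic case (y alone is not enough), which is presumably why the current warp_activation_backward interface does not fit SiLU directly.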