Code-for-MPELU
About using MPELU in PyTorch
I wonder whether this Torch version of the code can run in PyTorch? I use PyTorch, and I don't know how to use the module you wrote there.
Sorry, the code is currently for Caffe and Torch; I haven't implemented a PyTorch version. MPELU can be factorized as the product of ELU and PReLU, so it is straightforward to write a (less efficient) version in PyTorch. An efficient C++ implementation turned out to be nontrivial; I will update the code when it's done.
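For reference, here is a minimal sketch of such an inefficient PyTorch version, written as a plain `nn.Module` so that autograd handles the gradients. It assumes the MPELU definition from the paper, f(x) = x for x > 0 and α(exp(βx) − 1) for x ≤ 0, with learnable per-channel α and β; the module name and initial values are placeholders, not the repo's API.

```python
import torch
import torch.nn as nn

class MPELU(nn.Module):
    """Naive MPELU sketch: f(x) = x for x > 0, alpha * (exp(beta * x) - 1) for x <= 0,
    with learnable alpha and beta (hypothetical module, not the repo's official API)."""

    def __init__(self, num_parameters=1, alpha_init=0.25, beta_init=1.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.full((num_parameters,), alpha_init))
        self.beta = nn.Parameter(torch.full((num_parameters,), beta_init))

    def forward(self, x):
        # Broadcast alpha/beta over (N, C, ...) inputs; num_parameters is 1 or C.
        if x.dim() > 1:
            shape = (1, -1) + (1,) * (x.dim() - 2)
        else:
            shape = (-1,)
        alpha = self.alpha.view(shape)
        beta = self.beta.view(shape)
        # Clamp before exp so the exponential only acts on the non-positive part,
        # which also avoids overflow for large positive activations.
        neg = alpha * (torch.exp(beta * x.clamp(max=0)) - 1)
        return x.clamp(min=0) + neg
```

It would be used like `nn.PReLU`, e.g. `MPELU(num_parameters=64)` after a 64-channel conv layer.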
Thank you for your reply! Yes, I have implemented similar parametric activation functions in PyTorch by subclassing torch.autograd.Function. An obvious problem is that it is very slow and uses a lot of memory. Moreover, when I tested it in a network, the loss was always NaN. I suspect there may be a bug in my backward gradient calculation, which is why I wanted the original implementation.
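In case it helps with debugging, here is a sketch of what the manual backward could look like when MPELU is written as a torch.autograd.Function, again assuming f(x) = x for x > 0 and α(exp(βx) − 1) for x ≤ 0. It uses single-element α and β for brevity (a channel-wise version would need to reduce the parameter gradients over the broadcast dimensions), and the class name is hypothetical.

```python
import torch

class MPELUFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, alpha, beta):
        ctx.save_for_backward(x, alpha, beta)
        # Evaluate the exponential only on the clamped (non-positive) input.
        return x.clamp(min=0) + alpha * (torch.exp(beta * x.clamp(max=0)) - 1)

    @staticmethod
    def backward(ctx, grad_out):
        x, alpha, beta = ctx.saved_tensors
        neg = (x <= 0).type_as(x)               # mask for the negative branch
        exp_bx = torch.exp(beta * x.clamp(max=0))
        # df/dx     = 1 for x > 0, alpha * beta * exp(beta * x) for x <= 0
        grad_x = grad_out * ((1 - neg) + neg * alpha * beta * exp_bx)
        # df/dalpha = exp(beta * x) - 1 for x <= 0, 0 otherwise
        grad_alpha = (grad_out * neg * (exp_bx - 1)).sum().reshape(alpha.shape)
        # df/dbeta  = alpha * x * exp(beta * x) for x <= 0, 0 otherwise
        grad_beta = (grad_out * neg * alpha * x * exp_bx).sum().reshape(beta.shape)
        return grad_x, grad_alpha, grad_beta
```

One thing worth checking for the NaN issue is whether the exponential is evaluated on the raw input rather than a clamped one: exp(βx) overflows quickly for large positive activations even if that branch is later masked out.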
MPELU now supports PyTorch.