autobw.torch
TODO
Nice package! I have to play with it for a bit to get a feel for it. How complete is it, and what's next?
For now it works at the level of torch's nn modules. In that sense it's complete, since it works fine as long as all the computation happens in nn.Modules and nn.Criterions, but error handling is probably suboptimal.
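Roughly, usage looks something like the sketch below. The `Tape` name and its `begin`/`stop`/`backward` methods here are illustrative assumptions rather than the package's exact API; the point is that the forward pass through ordinary nn modules is recorded, and the backward pass is derived from that recording:

```lua
require 'nn'
local autobw = require 'autobw'  -- assumed module name, for illustration

-- ordinary nn building blocks; the tape only ever sees these module calls
local model = nn.Sequential()
   :add(nn.Linear(10, 20))
   :add(nn.Tanh())
   :add(nn.Linear(20, 1))
local criterion = nn.MSECriterion()

local input, target = torch.randn(10), torch.randn(1)

local tape = autobw.Tape()  -- hypothetical constructor name

-- record the forward pass...
tape:begin()
local output = model:forward(input)
local loss = criterion:forward(output, target)
tape:stop()

-- ...then replay it in reverse to accumulate gradients,
-- with no hand-written backward pass
tape:backward()
```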
Hi Brendan,
I guess the question a lot of people would be thinking is: apart from the elegance autobw brings, have you noticed any speed-ups?
Basically, by writing a super-efficient forward pass that gets rid of excess storage (either nn containers or nngraphs), have you noticed you get a faster objective function?
It's on my TODO list to implement it for the system I'm working on at the moment, but it'll be maybe a week before I can do the experiments.
Best regards,
Aj
I don't have anything to compare directly, but I doubt it. The actual computation should be dominant.
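One quick way to check this on any given model is to time the container path against direct calls into the same modules. A minimal sketch, with arbitrary layer sizes:

```lua
require 'nn'

-- same computation two ways: through an nn container, and by
-- calling the underlying modules directly
local lin1, lin2 = nn.Linear(512, 512), nn.Linear(512, 512)
local tanh = nn.Tanh()
local seq = nn.Sequential():add(lin1):add(tanh):add(lin2)
local x = torch.randn(128, 512)

local function bench(f, n)
   f()  -- warm-up
   local timer = torch.Timer()
   for i = 1, n do f() end
   return timer:time().real / n
end

local t_container = bench(function() seq:forward(x) end, 100)
local t_direct = bench(function()
   lin2:forward(tanh:forward(lin1:forward(x)))
end, 100)
print(('container: %.6fs  direct: %.6fs'):format(t_container, t_direct))
-- if the two times are close, the matrix multiplies dominate and
-- trimming container overhead buys little
```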
Thanks!
I just thought you might like a look at this variation of your LSTM gModule:
https://gist.github.com/skaae/ea4320e17379d408e693
Søren Kaae Sønderby, who wrote the code, says it gives a 40% speed-up, but unfortunately I haven't been able to reproduce that. Maybe you want to give it a try?
Yes, that should definitely give a speed-up over the Google version. Take a look at oxnn as well, by Tomas from our language group (I'm on my phone so I won't add a link); it does the same thing, in addition to some in-place ops for a further speed-up. It can be incorporated into the nngraph version easily.
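For reference, the trick such fast LSTM variants use (and what I take the gist above to be doing, though that's my reading rather than a quote) is to fuse the four gate pre-activations into a single Linear per input and slice the result, instead of using four separate Linear modules per gate. A sketch in nngraph, assuming batched input and an arbitrary rnn_size:

```lua
require 'nngraph'

local rnn_size = 256

local x      = nn.Identity()()  -- input at this timestep
local prev_c = nn.Identity()()  -- previous cell state
local prev_h = nn.Identity()()  -- previous hidden state

-- one matrix multiply each for input and hidden, covering all four gates
local i2h = nn.Linear(rnn_size, 4 * rnn_size)(x)
local h2h = nn.Linear(rnn_size, 4 * rnn_size)(prev_h)
local gates = nn.CAddTable()({i2h, h2h})

-- slice the fused pre-activations into the individual gates
-- (dim 2, since the input is batch x 4*rnn_size)
local in_gate      = nn.Sigmoid()(nn.Narrow(2, 1,                rnn_size)(gates))
local forget_gate  = nn.Sigmoid()(nn.Narrow(2, rnn_size + 1,     rnn_size)(gates))
local out_gate     = nn.Sigmoid()(nn.Narrow(2, 2 * rnn_size + 1, rnn_size)(gates))
local in_transform = nn.Tanh()(nn.Narrow(2, 3 * rnn_size + 1,    rnn_size)(gates))

local next_c = nn.CAddTable()({
   nn.CMulTable()({forget_gate, prev_c}),
   nn.CMulTable()({in_gate, in_transform}),
})
local next_h = nn.CMulTable()({out_gate, nn.Tanh()(next_c)})

local lstm = nn.gModule({x, prev_c, prev_h}, {next_c, next_h})
```

A single 4*rnn_size matrix multiply amortizes better than four separate rnn_size ones, which would be where the reported speed-up comes from.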