da-rnn
Is there any room for GPU memory improvements?
Hi, it looks like the loop inside the decoder consumes GPU memory very quickly if you try to increase the history size, because we keep tracking the hidden and cell states across the whole loop. I wonder if there is any room for improvement. Would it defeat the purpose of the model if we detached some things somehow?
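For illustration, here is a minimal sketch (not code from this repository; `run_decoder` and `tbptt_steps` are hypothetical names) of the kind of detaching being asked about: truncated backpropagation through time. Calling `.detach()` on the hidden and cell states every few steps cuts the autograd graph, so memory stops growing with the full history length, at the cost of gradients not flowing back past each cut:

```python
import torch
import torch.nn as nn

def run_decoder(cell, inputs, tbptt_steps=16):
    """Unroll an LSTMCell over `inputs` of shape (T, batch, input_size),
    detaching the states every `tbptt_steps` steps (hypothetical sketch)."""
    batch = inputs.size(1)
    h = torch.zeros(batch, cell.hidden_size)
    c = torch.zeros(batch, cell.hidden_size)
    outputs = []
    for t in range(inputs.size(0)):
        if t > 0 and t % tbptt_steps == 0:
            # Cut the graph here: autograd can free everything behind the
            # cut, so peak memory depends on tbptt_steps, not on T.
            h, c = h.detach(), c.detach()
        h, c = cell(inputs[t], (h, c))
        outputs.append(h)
    return torch.stack(outputs)

cell = nn.LSTMCell(input_size=8, hidden_size=16)
x = torch.randn(100, 4, 8)  # T=100, batch=4
out = run_decoder(cell, x)
print(out.shape)  # torch.Size([100, 4, 16])
```

Whether this "kills the purpose" of the model depends on whether the attention mechanism needs gradients through the entire history; detaching trades that for bounded memory.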
I have no idea. I have no experience optimizing GPU execution. Are you using the jit branch?
Hi, I was using master.