rq-vae-transformer
Support notebook execution on smaller devices
I was able to get the full-size parameter set working locally on my personal dev machine (16 samples on an RTX 3090), but I had to disable mixed precision and specify `map_location` as `cuda`. A few people on reddit suggested that they'd like to be able to do the same, so I added support for it in the notebook utils, along with notes in the notebook itself.
I haven't worked with PyTorch in enough different environments to know exactly why the default `map_location` of `cpu` didn't work for me, so I tried to be unintrusive and left the notebook's default as `cpu`. If you'd rather make `cuda` the default - I don't know the trade-offs of each beyond the fact that `cpu` wasn't working on my machine - I can modify this PR to do that instead.
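For reference, the shape of the change is roughly the following sketch (the helper name `resolve_map_location` is illustrative, not the actual function in this PR): the notebook util accepts a preferred `map_location` but falls back to `"cpu"` when CUDA isn't available, so the default still works on machines without a GPU.

```python
def resolve_map_location(prefer: str = "cpu", cuda_available: bool = False) -> str:
    """Pick a map_location string for torch.load.

    Falls back to "cpu" when "cuda" is requested but no CUDA device
    is available, so the notebook still runs on CPU-only machines.
    """
    if prefer == "cuda" and not cuda_available:
        return "cpu"
    return prefer
```

In the notebook this would be used along the lines of `torch.load(ckpt_path, map_location=resolve_map_location("cuda", torch.cuda.is_available()))`, which is what made the checkpoint load correctly on my 3090.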