AI Necromancer
@mrluin hi, I've bounced back and forth about how best to do this. The authors doing it this way doesn't automatically make it the best approach. I think there...
Hi @albert-ba, for now I suggest training on 1 GPU with a batch size of 2. It will take you a couple of hours per epoch. It's still not fast enough, but we're...
@fanrupin currently the multi-GPU setup doesn't always run faster. I have to look into this further, but I don't have time right now. If you have time to experiment and find the...
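If someone does want to experiment, here's a minimal sketch of the kind of setup worth trying, assuming the current multi-GPU path goes through nn.DataParallel (that's an assumption on my part): DistributedDataParallel usually scales better because each GPU runs in its own process and gradients are all-reduced instead of being gathered on GPU 0. The model below is a placeholder, not the repo's network.

```python
# Launch with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets LOCAL_RANK / RANK / WORLD_SIZE for us.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; swap in the actual segmentation network.
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    x = torch.randn(2, 3, 64, 64, device=local_rank)       # batch size 2 per GPU
    target = torch.randn(2, 8, 64, 64, device=local_rank)

    for _ in range(10):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), target)
        loss.backward()  # gradients are all-reduced across processes here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```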
@Harkirat155 I really want to thank you for this. It's not quite ready for master yet, as many things have changed. I've created a pull request to @memetics19; he will try...
Same problem here; it has something to do with the ASPP layer. @dagongji10, check whether the global average pooling and upsample layers are defined in the forward function. This might be...
Okay, looks like I found it. I've been training for several minutes and the memory looks steady. Add `self.level_2 = []`, `self.level_4 = []`, `self.level_8 = []`, `self.level_16 = []` ...
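In case it helps anyone else, here's a minimal sketch of the pattern I mean (only the level_2 / level_4 / level_8 / level_16 attribute names come from the snippet above; the rest of the module is illustrative): if intermediate features get appended to instance lists, re-initializing those lists at the top of every forward keeps old tensors and the autograd graphs they reference from accumulating across iterations.

```python
import torch
import torch.nn as nn

class MultiLevelBackbone(nn.Module):
    """Illustrative module that collects features at several strides."""

    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        # Feature lists; if they are never reset, every iteration's tensors
        # (and their graphs) stay referenced and memory keeps growing.
        self.level_2 = []
        self.level_4 = []
        self.level_8 = []
        self.level_16 = []

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clear the lists at the start of forward so memory stays steady.
        self.level_2 = []
        self.level_4 = []
        self.level_8 = []
        self.level_16 = []

        y = self.stem(x)
        self.level_2.append(y)  # further levels appended as they are computed
        return y
```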
@dagongji10 I also took out the ASPP layer, but on its own that didn't change anything. It's possible the problem is with both. I'll check now.
@dagongji10 It runs fine with the ASPP layer, though I'm not sure it's a good idea to define the upsample function and global averaging in the forward function.
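For what it's worth, a minimal sketch of the alternative I'd lean towards (the class name and channel counts are illustrative, not the repo's actual ASPP code): construct the pooling and conv once in __init__ and only apply them in forward; a parameter-free op like the upsample can stay functional without any downside.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPPPoolingBranch(nn.Module):
    """Illustrative ASPP image-pooling branch.

    The pooling and 1x1 conv are created once in __init__; forward only
    applies them, so no modules are constructed per call.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        size = x.shape[-2:]
        y = F.relu(self.conv(self.pool(x)))
        # Upsampling has no parameters, so the functional form is fine here.
        return F.interpolate(y, size=size, mode="bilinear", align_corners=False)
```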
I get this error too
I just changed the output to a single train_loader. @dagongji10, do you see any reason we need two?
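Just to make the change concrete, a minimal sketch of what I mean (the helper name, datasets, and shapes are all hypothetical, not from the repo): build one DataLoader over the combined training data and return only that, rather than returning two loaders.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def get_train_loader(batch_size: int = 2) -> DataLoader:
    """Hypothetical helper that returns a single training loader."""
    # Stand-in datasets; in the real code these would be whatever two
    # sources previously produced the two separate loaders.
    part_a = TensorDataset(torch.randn(8, 3, 32, 32), torch.zeros(8, dtype=torch.long))
    part_b = TensorDataset(torch.randn(8, 3, 32, 32), torch.ones(8, dtype=torch.long))
    return DataLoader(ConcatDataset([part_a, part_b]), batch_size=batch_size, shuffle=True)

train_loader = get_train_loader()
```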