Sil van de Leemput

21 comments of Sil van de Leemput

@cetmann Hi, thanks for letting me know. I am currently on vacation, but I'll have a look at it once I am back.

Hi @cetmann, I have had a look at the table from your paper. Memory consumption during training is a bit complicated both to explain and to measure and requires some...

Hi, thanks for your interest in MemCNN. Sadly, this is not supported at the moment. The memory-saving mechanism works similarly to checkpointing. Checkpointing with data-parallel is known...

Hi, thank you for your interest in MemCNN. You have identified an interesting behavior, which wasn't present in PyTorch 1.7.0 (last tested version for MemCNN). Apparently, it only happens when...

@lighShanghaitech There is a use-case in my tests that required me to do so, and I am pretty sure some tests fail if I remove those lines. But I'll have...

Hi @lighShanghaitech, thanks for using MemCNN. I think your approach is reasonable. Just out of curiosity, why do you need n=3 to reset and not n=2? As an alternative approach,...

@lighShanghaitech That's interesting, could you maybe provide some example code to reproduce the problem? Then I can have a look.

Yes, it should support multi-GPU training. Please let me know if you run into any issues.

I assume that what you are showing is the output of the `nvidia-smi` command. The ReversibleBlock of MemCNN by default frees the input memory once the output has been assigned. So the memory might...

Hi, just a small comment on your example code: the inverse of the `Fwd` Module looks wrong, since 0 is not the inverse of `x + f(x)`. I assume...
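As an illustration of why `x + f(x)` has no easy inverse while a coupling does, here is a minimal plain-Python sketch (hypothetical names, not the MemCNN API) of the additive coupling pattern that reversible blocks are built on: the input is split in two halves, so the update can be undone exactly even when `f` itself is not invertible.

```python
def f(x):
    # Arbitrary, even non-invertible, function applied to one half.
    return 3 * x * x + 1

def forward(x1, x2):
    # Additive coupling: y1 mixes in x2 only through f(x2),
    # and x2 passes through unchanged.
    y1 = x1 + f(x2)
    y2 = x2
    return y1, y2

def inverse(y1, y2):
    # Exact reconstruction of the input from the output alone:
    # recover x2 first, then subtract f(x2) to recover x1.
    x2 = y2
    x1 = y1 - f(x2)
    return x1, x2

x1, x2 = 5.0, 2.0
y1, y2 = forward(x1, x2)
assert inverse(y1, y2) == (x1, x2)
```

Because the input can be reconstructed from the output like this, a reversible block does not need to keep its input activations in memory during training.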