memcnn
PyTorch Framework for Developing Memory Efficient Deep Invertible Networks
* MemCNN version: 1.5.1
* PyTorch version: 1.10.1
* Python version: 3.8.12
* Operating System: Windows 10

### Description
Hi, thank you for making this super useful library publicly available....
Hello, I am wondering whether memcnn works with data parallelism. When I use it with DataParallel for multi-GPU training, all layers have requires_grad=False.
* MemCNN version: 1.5.0
* PyTorch...
Hi @silvandeleemput, thanks for your code! Could you give a simple example of how to do classification using memcnn?
* MemCNN version: newest
* PyTorch version: 1.6
* Python version: 3.6
* Operating System: Ubuntu

### Description
I want to implement the inverse of an MLP. When using AdditiveCoupling, it works: ![image](https://user-images.githubusercontent.com/37701943/127181477-3760972d-5de3-43a4-ab9d-bdfda3680b9e.png) but...
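For context, the additive coupling asked about here is analytically invertible by construction. A minimal plain-Python sketch of the idea (the sub-functions `F` and `G` below are arbitrary stand-ins for sub-networks, not memcnn's modules):

```python
# Additive coupling: split the input into two halves (x1, x2) and compute
#   y1 = x1 + F(x2)
#   y2 = x2 + G(y1)
# The inverse needs no matrix inversion, only subtraction in reverse order:
#   x2 = y2 - G(y1)
#   x1 = y1 - F(x2)
# F and G can be arbitrary functions; they do not need to be invertible.

def F(v):  # hypothetical stand-in for a sub-network
    return [2.0 * e + 1.0 for e in v]

def G(v):  # another hypothetical stand-in
    return [e * e for e in v]

def forward(x1, x2):
    y1 = [a + b for a, b in zip(x1, F(x2))]
    y2 = [a + b for a, b in zip(x2, G(y1))]
    return y1, y2

def inverse(y1, y2):
    x2 = [a - b for a, b in zip(y2, G(y1))]
    x1 = [a - b for a, b in zip(y1, F(x2))]
    return x1, x2

x1, x2 = [0.5, -1.0], [2.0, 3.0]
y1, y2 = forward(x1, x2)
rx1, rx2 = inverse(y1, y2)
assert rx1 == x1 and rx2 == x2  # exact reconstruction
```

In memcnn this pattern is what `AdditiveCoupling` implements with torch modules for the two sub-networks.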
* MemCNN version: 1.4.0
* Python version: 3.7
* Operating System: Ubuntu 16.06

### Description
Hi @silvandeleemput. I am using MemCNN for blocks with a dropout layer inside. I find the inverse...
```
# Setting the gradients manually on the inputs and outputs (mimic backwards)
for element, element_grad in zip(inputs, gradients[:ctx.num_inputs]):
    element.grad = element_grad
for element, element_grad in zip(outputs, grad_outputs):
    element.grad =...
```
Hi! I did some benchmarking recently and found that the memory demand is not _quite_ independent of the depth; see Table 3 on the last page of https://arxiv.org/abs/2005.05220. My suspicion...
I've added new tests for the memory savings of the Reversible Block, showing that `keep_input=True` uses more memory than `keep_input=False`. There are different implementations for GPU RAM and CPU...
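The memory saving being tested here can be sketched in plain Python: with `keep_input=False` the block's input is freed after the forward pass and reconstructed from the output during the backward pass via the analytic inverse. The names below are illustrative, not memcnn internals:

```python
# Sketch of the memory-saving idea behind keep_input=False: after the
# forward pass the block's input is discarded, and during the backward
# pass it is reconstructed from the output with the coupling's inverse,
# so activation memory stays O(1) in the number of blocks.

def f(v):  # hypothetical sub-network
    return [3.0 * e for e in v]

def g(v):  # hypothetical sub-network
    return [e + 4.0 for e in v]

def coupling_forward(x1, x2):
    y1 = [a + b for a, b in zip(x1, f(x2))]
    y2 = [a + b for a, b in zip(x2, g(y1))]
    return y1, y2

def coupling_inverse(y1, y2):
    x2 = [a - b for a, b in zip(y2, g(y1))]
    x1 = [a - b for a, b in zip(y1, f(x2))]
    return x1, x2

x1, x2 = [1.0, 2.0], [0.25, -0.5]
y1, y2 = coupling_forward(x1, x2)
saved = (x1, x2)                 # ground truth for the check below
del x1, x2                       # "free" the input, as keep_input=False would
reconstructed = coupling_inverse(y1, y2)   # recomputed during backward
assert reconstructed == saved
```

With `keep_input=True` the input stays resident instead, which is why those tests observe higher memory use.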
* MemCNN version: latest
* Python version: 3.7
* Operating System: Ubuntu 16.06

Hi, in the function called "create_coupling" (from revop.py), there are two implementation-mode variables (i.e. implementation_fwd and...
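As a rough illustration of what such mode selectors typically do: each mode value maps to a different forward/backward strategy. This is a hypothetical dispatch sketch under that assumption, not memcnn's actual `create_coupling` code:

```python
# Hypothetical dispatch for "implementation mode" style parameters: each
# mode value selects a different strategy for the forward pass. The mode
# names and values here are illustrative only.

FWD_IMPLS = {
    -1: lambda x: ("plain", x),          # keep inputs, standard autograd
    0:  lambda x: ("memory_saving", x),  # free inputs, recompute in backward
}

def create_coupling_sketch(implementation_fwd=-1):
    if implementation_fwd not in FWD_IMPLS:
        raise ValueError("unknown implementation_fwd: %r" % implementation_fwd)
    return FWD_IMPLS[implementation_fwd]

fwd = create_coupling_sketch(0)
assert fwd(42) == ("memory_saving", 42)
```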