memcnn
Problems when memcnn is used with data parallel
Hello~ I am wondering if memcnn works with data parallelism. When I use it with nn.DataParallel for multi-GPU training, all layers end up with requires_grad=False.
- MemCNN version: 1.5.0
- PyTorch version: 1.8
- Python version: 3.7
- Operating System: Ubuntu 18.04
Thanks~
Hi, thanks for your interest in MemCNN. Sadly, this is not supported at the moment. MemCNN's memory saving works in a similar way to checkpointing, and checkpointing is known to cause issues with data parallelism; see this thread: https://github.com/pytorch/pytorch/issues/24005. If you manage to get it working or track down the problem, please let me know! It would be great if MemCNN could support data parallelism.
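For context, here is a minimal sketch (not MemCNN code, just plain torch.utils.checkpoint on CPU) of the mechanism in question: activations inside the checkpointed block are discarded after the forward pass and recomputed during backward. This implicit recomputation is what interacts badly with nn.DataParallel, which runs the forward pass on per-GPU replicas of the module rather than on the original module whose parameters receive the gradients.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# A small block whose intermediate activations we do not want to store.
block = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))

# The input must require grad, otherwise the recomputed graph is detached.
x = torch.randn(4, 8, requires_grad=True)

# Forward through the checkpoint: activations inside `block` are dropped
# and recomputed when backward() is called.
y = checkpoint(block, x)
y.sum().backward()

# In a single-process setting, gradients flow as expected.
print(x.grad is not None, block[0].weight.grad is not None)  # True True
```

Under nn.DataParallel the same pattern breaks down because each replica's forward state is not the state autograd later recomputes against, which matches the requires_grad=False symptom reported above.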