
RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 22)

Open · yqwu94 opened this issue on Nov 25, 2020 · 3 comments

Hi, I ran into the following CUDA runtime error: RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 22). I have recently been studying normalizing flows such as Glow, and this strange SVD problem appeared when I tried to train Glow from scratch. In my opinion, because Glow contains a "tensor.slogdet()" operation in the affine coupling layer, it may involve an SVD decomposition and thus cause the problem above. Specifically, I start with a small learning rate, such as 1e-6, and the training loss begins to fall slowly. However, once the learning rate reaches 0.0004, the training loss suddenly jumps to inf and the error above is raised. How can I avoid this error while training Glow?
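
A common way to work around a training loss that suddenly jumps to inf is to skip non-finite batches and clip gradients before the optimizer step. Below is a minimal sketch assuming a generic PyTorch training loop, not this repository's exact flow_main.py code; model, optimizer, dataloader, and model.log_prob are hypothetical names.

    import torch

    for x, _ in dataloader:
        x = x.cuda()
        optimizer.zero_grad()
        loss = -model.log_prob(x).mean()   # hypothetical flow NLL objective
        if not torch.isfinite(loss):       # skip batches whose loss is already inf/nan
            continue
        loss.backward()
        # bound the gradient norm so the coupling/1x1-conv weights cannot blow up in one step
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=50.0)
        optimizer.step()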

yqwu94 avatar Nov 25 '20 13:11 yqwu94

Hi - when you say 'when the learning rate reaches 0.0004', it sounds like you are increasing the learning rate during training. Is that what you are doing or are you starting training with a new learning rate and keeping it fixed for the duration of training? What dataset are you using?
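
If the intent is learning-rate warmup rather than a fixed rate, one standard way to express it in PyTorch is a LambdaLR schedule that ramps linearly up to the optimizer's base rate. A minimal sketch, with optimizer and warmup_steps as assumed names/values:

    import torch

    warmup_steps = 10_000  # assumed value; tune per setup

    # linear ramp from ~0 to the base lr over warmup_steps optimizer steps, then constant
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lr_lambda=lambda step: min(1.0, (step + 1) / warmup_steps))

    # call scheduler.step() once per optimizer.step() during training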

kamenbliznashki avatar Dec 31 '20 16:12 kamenbliznashki

Hi, I'm also facing a similar problem:

RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 11)

Dataset: MNIST
Model: Glow
Python: 3.8.5
PyTorch: 1.6.0
torchvision: 0.8.2
CUDA/cuDNN: module load cudnn/7-cuda-10.0

python -m torch.distributed.launch --nproc_per_node=3 \
    flow_main.py --train \
    --distributed \
    --dataset=mnist \
    --n_levels=3 \
    --depth=32 \
    --width=512 \
    --batch_size=16 \
    --generate \
    --n_epochs=10
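
To rule out the distributed setup as the trigger, it may be worth reproducing on a single process first; the variant below simply mirrors the flags above without torch.distributed.launch and is an untested assumption about how flow_main.py is invoked in that case.

    python flow_main.py --train --dataset=mnist --n_levels=3 --depth=32 \
        --width=512 --batch_size=16 --generate --n_epochs=10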

Error

File "flow_main.py", line 489, in train_epoch loss.backward() File "/home/sandeep.nagar/anaconda3/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/sandeep.nagar/anaconda3/lib/python3.8/site-packages/torch/autograd/init.py", line 130, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/sandeep.nagar/anaconda3/lib/python3.8/site-packages/torch/autograd/init.py", line 130, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "/home/sandeep.nagar/anaconda3/lib/python3.8/site-packages/torch/autograd/init.py", line 130, in backward Variable._execution_engine.run_backward( RuntimeError: svd_cuda: the updating process of SBDSDC did not converge (error: 11)

Naagar avatar Jan 05 '21 16:01 Naagar

Any updates on this issue?

pandya6988 avatar Oct 29 '21 14:10 pandya6988