adaptive-style-transfer-pytorch

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation

Open · zerclanzhang opened this issue 2 years ago · 0 comments

Hello, my conda env is PyTorch 1.10 + CUDA 11.3 + cuDNN 8.0 + Python 3.6, and my GPU is an RTX 3090, so I cannot use the PyTorch 1.0 version you showed. The training process reported an error:

Processes are started.
  0%|          | 0/200000 [00:00<?, ?it/s]
C:\ProgramData\Anaconda3\envs\pytorch110\lib\site-packages\torch\optim\lr_scheduler.py:134: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
  0%|          | 0/200000 [00:07<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 126, in <module>
    discr_success = trainer.update(batch_art, batch_content, opts, discr_success, alpha, discr_success >= win_rate)
  File "L:\AI-Photo-Art\adaptive-style-transfer-pytorch\model.py", line 104, in update
    d_acc = self.dis_update(batch_art_preds, batch_content_preds, batch_output_preds, options)
  File "L:\AI-Photo-Art\adaptive-style-transfer-pytorch\model.py", line 80, in dis_update
    self.discr_loss.backward()
  File "C:\ProgramData\Anaconda3\envs\pytorch110\lib\site-packages\torch\_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\ProgramData\Anaconda3\envs\pytorch110\lib\site-packages\torch\autograd\__init__.py", line 156, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [3, 32, 7, 7]] is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
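As the error hint itself suggests, the first diagnostic step is to enable autograd anomaly detection so PyTorch reports which forward operation produced the tensor that was later modified in place. A minimal sketch; the exact call site (near the top of train.py, before the training loop starts) is my assumption, any point before the failing backward() works:

```python
import torch

# Assumed placement: before the training loop in train.py.
# With this enabled, the RuntimeError will also print the forward-pass
# operation whose output was modified in place, at the cost of slower training.
torch.autograd.set_detect_anomaly(True)
```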

Could you take a look at this and help fix it? Many thanks~
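For context: this error is typical when GAN-style code written for PyTorch 1.0 runs on 1.5 or later, where autograd now checks tensor versions. If an optimizer.step() updates weights in place and a later .backward() still walks a graph built with the pre-update weights (for example, the discriminator backward reusing generator activations from an earlier forward pass), the version check fails exactly as shown above. I have not inspected model.py in detail, so the sketch below is only an illustration of the usual fix, not the repository's actual API; the names generator, discriminator, dis_opt, gen_opt, real, and content are placeholders. The key points are to detach (or recompute) the fake batch for the discriminator update, and to give each backward()/step() pair its own freshly built graph:

```python
import torch
import torch.nn as nn

def dis_update(discriminator, generator, dis_opt, real, content):
    """One discriminator step on a freshly computed graph (illustrative sketch)."""
    dis_opt.zero_grad()
    with torch.no_grad():            # keep generator parameters out of this backward pass
        fake = generator(content)
    loss_real = nn.functional.softplus(-discriminator(real)).mean()
    loss_fake = nn.functional.softplus(discriminator(fake)).mean()
    d_loss = loss_real + loss_fake
    d_loss.backward()
    dis_opt.step()                   # in-place weight update happens only after
    return d_loss.item()             # this graph has been fully consumed

def gen_update(discriminator, generator, gen_opt, content):
    """Generator step: recompute the fake batch so no stale graph is reused."""
    gen_opt.zero_grad()
    fake = generator(content)        # fresh forward pass, fresh graph
    g_loss = nn.functional.softplus(-discriminator(fake)).mean()
    g_loss.backward()
    gen_opt.step()
    return g_loss.item()
```

If the repository's update() instead computes all predictions once and reuses them across the discriminator and generator/transformer losses, reordering the calls so that every backward() completes before any optimizer.step() that touches the same parameters should also resolve the version mismatch.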

zerclanzhang · Nov 22 '21 06:11