
RuntimeError when running train.py

Open · caishanglei opened this issue 3 years ago · 1 comment

```
Traceback (most recent call last):
  File "scripts/train.py", line 131, in <module>
    run_experiment()
  File "scripts/train.py", line 104, in run_experiment
    experiment(variant, agent=args.agent)
  File "/home/luai/1/robosuite/robosuite-benchmark/util/rlkit_utils.py", line 163, in experiment
    algorithm.train()
  File "/home/luai/1/robosuite/robosuite-benchmark/util/rlkit_custom.py", line 46, in train
    self._train()
  File "/home/luai/1/robosuite/robosuite-benchmark/util/rlkit_custom.py", line 235, in _train
    self.trainer.train(train_data)
  File "/home/luai/1/robosuite/robosuite-benchmark/rlkit/rlkit/torch/torch_rl_algorithm.py", line 40, in train
    self.train_from_torch(batch)
  File "/home/luai/1/robosuite/robosuite-benchmark/rlkit/rlkit/torch/sac/sac.py", line 144, in train_from_torch
    policy_loss.backward()
  File "/home/luai/anaconda3/envs/rl/lib/python3.6/site-packages/torch/tensor.py", line 245, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/luai/anaconda3/envs/rl/lib/python3.6/site-packages/torch/autograd/__init__.py", line 147, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [256, 1]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```
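For context, this class of error is not specific to rlkit: it occurs whenever a tensor that autograd saved for the backward pass is modified in place before `.backward()` runs, bumping its version counter. A minimal, self-contained reproduction (illustration only, unrelated to the SAC code above), including the anomaly-detection hint from the error message:

```python
import torch

# Minimal reproduction of the same class of error (not rlkit code):
# a tensor saved for backward is modified in place, so its version
# counter no longer matches what autograd recorded.
torch.autograd.set_detect_anomaly(True)  # reports the forward op whose backward failed

x = torch.ones(3, requires_grad=True)
y = x * 2
z = (y ** 2).sum()  # autograd saves y here, since d(y**2)/dy = 2*y
y.add_(1)           # in-place edit bumps y's version counter

try:
    z.backward()
except RuntimeError as e:
    print("caught:", e)  # "... modified by an inplace operation ..."
```

Replacing `y.add_(1)` with the out-of-place `y = y + 1` makes the snippet run cleanly, which is the generic shape of the fix.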

caishanglei · Jul 05 '21

Hi, I ran into the same problem. You can fix it by changing the version of torch.
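Downgrading torch (this error is commonly reported after moving to PyTorch >= 1.5) is one workaround. An alternative that users of similar SAC implementations have reported is reordering the updates so that every `.backward()` whose graph depends on a network runs before that network's optimizer `.step()` mutates its weights in place. A hedged sketch with hypothetical networks (not rlkit's actual `train_from_torch`):

```python
import torch

# Hypothetical SAC-like setup, for illustration only (not rlkit's code).
# The policy loss's graph runs through the Q-network, so stepping the
# Q-optimizer before policy_loss.backward() would mutate weights that
# the policy loss's graph still needs.
torch.manual_seed(0)
qf = torch.nn.Linear(4, 1)
policy = torch.nn.Linear(4, 4)
qf_opt = torch.optim.SGD(qf.parameters(), lr=0.1)
pi_opt = torch.optim.SGD(policy.parameters(), lr=0.1)

obs = torch.randn(8, 4)
qf_loss = qf(obs).pow(2).mean()
policy_loss = -qf(policy(obs)).mean()

# Failing order on newer torch:
#   qf_loss.backward(); qf_opt.step()   # step() bumps qf.weight's version
#   policy_loss.backward()              # -> RuntimeError (version mismatch)

# Working order: update the policy first, then the Q-function.
pi_opt.zero_grad()
policy_loss.backward()  # also accumulates grads into qf; zeroed below
pi_opt.step()

qf_opt.zero_grad()      # discard the stale qf grads from policy_loss
qf_loss.backward()
qf_opt.step()
```

Which ordering is correct for the real trainer depends on rlkit's actual update logic, so downgrading torch as suggested above is the lower-risk option.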

csufangyu · Aug 03 '21