ENAS-pytorch

RuntimeError: grad can be implicitly created only for scalar outputs

Open Shn9909 opened this issue 3 years ago • 0 comments

I encountered this strange error; the output is below, thank you. Earlier the error said the tensors could not be on CPU and GPU at the same time, so I added .cuda() after the loss, and now it shows this error instead.

```
Traceback (most recent call last):
  File "D:/xiangmu/ENAS-pytorch-master/main.py", line 56, in <module>
    main(args)
  File "D:/xiangmu/ENAS-pytorch-master/main.py", line 35, in main
    trnr.train()
  File "D:\xiangmu\ENAS-pytorch-master\trainer.py", line 223, in train
    self.train_shared(dag=dag)
  File "D:\xiangmu\ENAS-pytorch-master\trainer.py", line 317, in train_shared
    loss.backward()
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\_tensor.py", line 307, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\autograd\__init__.py", line 150, in backward
    grad_tensors_ = _make_grads(tensors, grad_tensors_)
  File "C:\Users\sunhaonan\.conda\envs\enas\lib\site-packages\torch\autograd\__init__.py", line 51, in _make_grads
    raise RuntimeError("grad can be implicitly created only for scalar outputs")
RuntimeError: grad can be implicitly created only for scalar outputs
```
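For context: PyTorch raises this error when `backward()` is called on a tensor that is not a scalar and no explicit `gradient` argument is supplied, which suggests that `loss` in `train_shared` is still a per-sample vector at the point where `loss.backward()` runs. Below is a minimal sketch, independent of the ENAS code, that reproduces the error and shows two common fixes; the variable names here are illustrative, not taken from this repository.

```python
import torch
import torch.nn.functional as F

# A non-scalar loss: one value per sample, because reduction is disabled.
pred = torch.randn(4, 10, requires_grad=True)
target = torch.randint(0, 10, (4,))
loss = F.cross_entropy(pred, target, reduction="none")  # shape: (4,)

# loss.backward()  # RuntimeError: grad can be implicitly created only for scalar outputs

# Fix 1: reduce the loss to a scalar before calling backward().
loss.mean().backward()

# Fix 2: keep the vector loss and pass an explicit gradient of the same shape.
pred.grad = None
loss2 = F.cross_entropy(pred, target, reduction="none")
loss2.backward(gradient=torch.ones_like(loss2))
```

If the earlier CPU/GPU mismatch was patched by appending `.cuda()` to the loss, it may be worth checking instead that the model and input batch are moved to the same device before the loss is computed, and that the loss is reduced (e.g. with `.mean()`) before `backward()`.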

Shn9909 · Oct 09 '22 00:10