
SAC_Bug

Open aut6620 opened this issue 2 years ago • 3 comments

In sac.py:

    s = torch.tensor([t.s for t in self.replay_buffer]).float().to(device)

Traceback (most recent call last):
  File "D:\PycharmProject\Deep-reinforcement-learning-with-pytorch-master\Char09 SAC\SAC.py", line 307, in <module>
    main()
  File "D:\PycharmProject\Deep-reinforcement-learning-with-pytorch-master\Char09 SAC\SAC.py", line 293, in main
    agent.update()
  File "D:\PycharmProject\Deep-reinforcement-learning-with-pytorch-master\Char09 SAC\SAC.py", line 244, in update
    Q_loss.backward(retain_graph = True)
  File "C:\Users\lx\anaconda3\envs\torch\lib\site-packages\torch\_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\Users\lx\anaconda3\envs\torch\lib\site-packages\torch\autograd\__init__.py", line 173, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Found dtype Double but expected Float
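For context, this error means one of the tensors feeding Q_loss is float64 (Double) while the network weights are float32. Gym returns observations as float64 NumPy arrays, and torch.tensor() preserves that dtype, so any batch tensor built without an explicit cast comes out Double. Below is a minimal, standalone sketch of the mismatch and of fixing it at construction time; the obs variable is just a stand-in for one stored state, not a name from the repo:

    import numpy as np
    import torch

    # Gym observations are float64 NumPy arrays by default, and torch.tensor()
    # keeps that dtype, so the batch ends up Double while the network
    # parameters are Float -> "Found dtype Double but expected Float".
    obs = np.zeros(3)                             # stand-in for one stored state t.s
    print(torch.tensor([obs]).dtype)              # torch.float64

    # Casting at construction time avoids sprinkling .float() over every loss term:
    s = torch.tensor([obs], dtype=torch.float32)
    print(s.dtype)                                # torch.float32

The same applies to the reward and done batches, which is typically how the Double dtype ends up inside next_q_value.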

aut6620 · May 23 '22 13:05

How do you deal with this?

zhaoyanghandd · Jun 09 '22 13:06

        V_loss = self.value_criterion(excepted_value, next_value.detach()).mean()  # J_V

        # Dual Q net
        Q1_loss = self.Q1_criterion(excepted_Q1.float(), next_q_value.detach().float()).mean() # J_Q

        # Q1_loss = Q1_loss.float()

        Q2_loss = self.Q2_criterion(excepted_Q2.float(), next_q_value.detach().float()).mean()
        # Q2_loss = Q2_loss.float()

        pi_loss = (log_prob.float() - excepted_new_Q.float()).mean() # according to original paper

(screenshot attached)

aut6620 · Jun 09 '22 14:06
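A note on the fix above: casting each operand inside the loss works, but the Double usually enters through the tensors used to build the Bellman target, so casting once where the target is formed keeps the rest of update() unchanged. A hedged sketch follows; the names reward, done, gamma, and target_value are assumptions based on the standard SAC target, not copied from the repo:

    import torch

    gamma = 0.99
    # Assumed batch tensors; in SAC.py these would come from the replay buffer.
    reward       = torch.tensor([[1.0]], dtype=torch.float32)   # rewards
    done         = torch.tensor([[0.0]], dtype=torch.float32)   # terminal flags
    target_value = torch.tensor([[0.5]], dtype=torch.float32)   # target V(s')

    # Standard soft Q target: r + gamma * (1 - done) * V_target(s')
    next_q_value = reward + gamma * (1.0 - done) * target_value
    print(next_q_value.dtype)   # torch.float32 -> Q1_loss/Q2_loss need no extra .float()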

1. I changed all the dtypes to float. 2. Then I ran into the next bug; the screenshot above shows what I had done.

aut6620 · Jun 09 '22 14:06