ElegantRL
Training a model fails after yesterday's update
Environment: Colab, ElegantRL
Model: tested SAC, ModSAC, and TD3; all of them raise the same error below
Issue: __init__() takes 4 positional arguments but 5 were given
     31     trained_model = agent.train_model(model=model,
     32                                       cwd=cwd,
---> 33                                       total_timesteps=break_step)
     34 else:
     35     raise ValueError('DRL library input is NOT supported. Please check.')

/usr/local/lib/python3.7/dist-packages/elegantrl/agents/AgentTD3.py in __init__(self, net_dim, state_dim, action_dim, gpu_id, args)
     22         self.act_class = getattr(self, "act_class", Actor)
     23         self.cri_class = getattr(self, "cri_class", CriticTwin)
---> 24         super().__init__(net_dim, state_dim, action_dim, gpu_id, args)
     25         self.policy_noise = getattr(
     26             args, "policy_noise", 0.15

/usr/local/lib/python3.7/dist-packages/elegantrl/agents/AgentDDPG.py in __init__(self, net_dim, state_dim, action_dim, gpu_id, args)
     25         self.act_class = getattr(self, "act_class", ActorSAC)
     26         self.cri_class = getattr(self, "cri_class", CriticTwin)
---> 27         super().__init__(net_dim, state_dim, action_dim, gpu_id, args)
     28         self.act.explore_noise = getattr(
     29             args, "explore_noise", 0.1

/usr/local/lib/python3.7/dist-packages/elegantrl/agents/AgentBase.py in __init__(self, net_dim, state_dim, action_dim, gpu_id, args)
     45         act_class = getattr(self, "act_class", None)
     46         cri_class = getattr(self, "cri_class", None)
---> 47         self.act = act_class(net_dim, self.num_layer, state_dim, action_dim).to(self.device)
     48         self.cri = cri_class(net_dim, self.num_layer, state_dim, action_dim).to(self.device)
     49             if cri_class else self.act

TypeError: __init__() takes 4 positional arguments but 5 were given
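From the traceback, it looks like AgentBase.__init__ (line 47) now passes num_layer to the network constructors, while the network classes apparently still expect the old signature without it. A minimal sketch of the mismatch, where the three-argument signature is only my guess at the un-updated network class, not confirmed from the source:

```python
# Hypothetical old-style network class (assumed signature, for illustration only):
class Actor:
    def __init__(self, net_dim, state_dim, action_dim):  # no num_layer parameter
        self.dims = (net_dim, state_dim, action_dim)

net_dim, num_layer, state_dim, action_dim = 256, 3, 8, 2

# AgentBase.__init__ now calls the constructor with four arguments, so Python
# sees five positionals including self and raises the same error as above:
act = Actor(net_dim, num_layer, state_dim, action_dim)
# TypeError: __init__() takes 4 positional arguments but 5 were given
```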
I worked around the problem by going back to an older version. Waiting for a follow-up fix.
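For anyone hitting the same error before a fix lands, pinning an earlier release in Colab worked for me. The version number below is only an example; use whichever release predates yesterday's update:

```python
# In a Colab cell; the pinned version is an example, not a confirmed fix version.
!pip install elegantrl==0.3.3
```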