ElegantRL
Bug Of Agent init
In `AgentBase.py`, `AgentBase.__init__()`:
```python
class AgentBase:
    def __init__(self, net_dim: int, state_dim: int, action_dim: int, gpu_id: int = 0, args: Arguments = None):
        ...
        '''network'''
        act_class = getattr(self, "act_class", None)
        cri_class = getattr(self, "cri_class", None)
        self.act = act_class(net_dim, self.num_layer, state_dim, action_dim).to(self.device)
        self.cri = cri_class(net_dim, self.num_layer, state_dim, action_dim).to(self.device) \
            if cri_class else self.act
```
The code passes `num_layer` when constructing `self.act`, but in `agent/net.py` most of the network classes do not accept a `num_layer` parameter in their `__init__`, for example `QNetTwinDuel`:
```python
class QNetTwinDuel(nn.Module):  # D3QN: Dueling Double DQN
    def __init__(self, mid_dim, state_dim, action_dim):
        super().__init__()
        self.net_state = nn.Sequential(
            nn.Linear(state_dim, mid_dim),
            nn.ReLU(),
            nn.Linear(mid_dim, mid_dim),
            nn.ReLU(),
        )
```
This signature mismatch raises an error when the agent is initialized.
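The mismatch can be reproduced and repaired without torch. Below is a minimal sketch: `QNetTwinDuelOld`, `QNetTwinDuelFixed`, and `build_act` are illustrative names, not ElegantRL's actual classes or the fix merged in the linked pull request; the "fixed" version simply accepts `num_layer` and records a variable-depth hidden stack as layer shapes.

```python
# Torch-free sketch of the reported bug: AgentBase passes num_layer,
# but the network __init__ does not accept it.

class QNetTwinDuelOld:
    # Mirrors the reported signature: no num_layer parameter.
    def __init__(self, mid_dim, state_dim, action_dim):
        self.mid_dim = mid_dim

class QNetTwinDuelFixed:
    # Hypothetical fix: accept num_layer and build a variable-depth
    # hidden stack, recorded here as (in_dim, out_dim) shape pairs
    # instead of nn.Linear layers.
    def __init__(self, mid_dim, num_layer, state_dim, action_dim):
        shapes = [(state_dim, mid_dim)]
        shapes += [(mid_dim, mid_dim)] * (num_layer - 1)
        self.layer_shapes = shapes
        self.action_dim = action_dim

def build_act(net_class, net_dim, num_layer, state_dim, action_dim):
    # Mirrors the AgentBase call site:
    # act_class(net_dim, self.num_layer, state_dim, action_dim)
    return net_class(net_dim, num_layer, state_dim, action_dim)

# The old signature raises TypeError when num_layer is passed through:
try:
    build_act(QNetTwinDuelOld, 64, 3, 8, 2)
except TypeError as exc:
    print("reproduced:", exc)

# The adjusted signature accepts the same call:
act = build_act(QNetTwinDuelFixed, 64, 3, 8, 2)
print(act.layer_shapes)  # [(8, 64), (64, 64), (64, 64)]
```

Making every network constructor take the same positional arguments is one way to keep the `getattr`-based construction in `AgentBase` uniform across agents.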
Thanks for reporting it. ElegantRL has been updated frequently recently.
The following pull request fixes this bug ↓ Fix bug for vec env and agentbase init https://github.com/AI4Finance-Foundation/ElegantRL/pull/248