FinRL
Error When Training Agent
When training the agent using FinRL, I keep getting the same error: the tensors argument at position 1 must be a tuple, not a map. I am not sure how to address this issue.
I am using Google Colab on a Mac. ERROR:
TypeError                                 Traceback (most recent call last)
4 frames
/usr/local/lib/python3.7/dist-packages/elegantrl/agents/AgentBase.py in convert_trajectory(failed resolving arguments)
    286
    287         '''stack items'''
--> 288         traj_state = torch.stack(traj_list1[0])
    289         traj_action = torch.stack(traj_list1[3])
    290
TypeError: stack(): argument 'tensors' (position 1) must be tuple of Tensors, not map
This problem can be fixed on the ElegantRL side by changing this line to:
traj_list1 = list(map(list, zip(*traj_list))) # state, reward, done, action, noise
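A minimal sketch of why this fix works, using hypothetical trajectory rows in place of real tensors (torch omitted so the snippet stands alone): a bare `map` object cannot be indexed the way `traj_list1[0]` requires, and `torch.stack` likewise rejects it, while wrapping it in `list(...)` materializes the columns into indexable lists.

```python
# Hypothetical trajectory rows: (state, reward, done, action, noise).
# In ElegantRL these entries would be torch tensors; plain lists stand in here.
traj_list = [
    ([0.1, 0.2], 1.0, False, [0], [0.0]),
    ([0.3, 0.4], 0.5, True,  [1], [0.1]),
]

# Without the outer list(), map(...) is a lazy iterator, not a sequence.
# Indexing it (as convert_trajectory does with traj_list1[0]) raises
# TypeError, and torch.stack() rejects it for the same reason.
transposed = map(list, zip(*traj_list))
# transposed[0]  # TypeError: 'map' object is not subscriptable

# The fix materializes the map into a list of column lists, so each
# column can be indexed and then stacked.
traj_list1 = list(map(list, zip(*traj_list)))  # state, reward, done, action, noise
states = traj_list1[0]   # [[0.1, 0.2], [0.3, 0.4]] -> safe to pass to torch.stack
```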
However, there are still some other bugs on the ElegantRL side.