ElegantRL
bug: TypeError: Value after * must be an iterable, not int
Running FinRL_MultiCrypto_Trading.py produces the error below; please fix it.
binance successfully connected
tech_indicator_list: ['macd', 'rsi', 'cci', 'dx']
indicator: macd
indicator: rsi
indicator: cci
indicator: dx
Succesfully add technical indicators
Successfully transformed into array
| Arguments Remove cwd: ./ppo
Traceback (most recent call last):
File "/Users/quran/SourceCode/easy_live/FinRL-Meta/tutorials/3-Practical/FinRL_MultiCrypto_Trading.py", line 69, in <module>
train(
File "/Users/quran/SourceCode/easy_live/FinRL-Meta/train.py", line 51, in train
trained_model = agent.train_model(
File "/Users/quran/SourceCode/easy_live/FinRL-Meta/agents/elegantrl_models.py", line 79, in train_model
train_and_evaluate(model)
File "/Users/quran/opt/anaconda3/envs/finrl-meta/lib/python3.10/site-packages/elegantrl/train/run.py", line 95, in train_and_evaluate
agent = init_agent(args, gpu_id, env)
File "/Users/quran/opt/anaconda3/envs/finrl-meta/lib/python3.10/site-packages/elegantrl/train/run.py", line 24, in init_agent
agent = args.agent_class(args.net_dim, args.state_dim, args.action_dim, gpu_id=gpu_id, args=args)
File "/Users/quran/opt/anaconda3/envs/finrl-meta/lib/python3.10/site-packages/elegantrl/agents/AgentPPO.py", line 40, in __init__
AgentBase.__init__(self, net_dim, state_dim, action_dim, gpu_id, args)
File "/Users/quran/opt/anaconda3/envs/finrl-meta/lib/python3.10/site-packages/elegantrl/agents/AgentBase.py", line 57, in __init__
self.act = act_class(net_dim, state_dim, action_dim).to(self.device)
File "/Users/quran/opt/anaconda3/envs/finrl-meta/lib/python3.10/site-packages/elegantrl/agents/net.py", line 397, in __init__
self.net = build_mlp_net(dims=[state_dim, *dims, action_dim])
TypeError: Value after * must be an iterable, not int
Process finished with exit code 1
I have the same issue. I'm using the scripts from the elegantrl/train directory and net.py from the elegantrl/agents directory. In net.py, the error is raised at the build_mlp_net call in the ActorDiscretePPO(ActorBase) class definition, where *dims tries to unpack an int while an iterable of layer widths (e.g. a list of ints) is expected.
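For context, here is a minimal standalone reproduction of the type error and the shape of the fix (build_mlp_net is a stub here, not the ElegantRL implementation):

def build_mlp_net(dims):        # stub: the real function builds an MLP from these layer sizes
    print("layer sizes:", dims)

state_dim, action_dim, net_dim = 10, 3, 512

# This is what happens when net_dim is a bare int:
# build_mlp_net(dims=[state_dim, *net_dim, action_dim])
# -> TypeError: Value after * must be an iterable, not int

# Wrapping the hidden-layer widths in a list makes the unpacking work:
build_mlp_net(dims=[state_dim, *[net_dim], action_dim])   # prints: layer sizes: [10, 512, 3]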
One additional remark: there are several versions of the same class (e.g. the Arguments class in run.py and config.py) and different versions of some scripts in different directories (e.g. net.py in the elegantrl, elegantrl/tutorial, and elegantrl/agents directories, or run.py in the elegantrl, elegantrl/tutorial, and elegantrl/train directories). Is there any documentation on which versions are compatible with each other? Is the idea that all scripts in the same directory should be compatible?
[1] I noticed it works with an earlier gym version, but with additional changes (see point [2] below). Downgrade gym to version 0.24.0 as follows:

pip3 install gym==0.24.0
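To confirm the downgrade took effect, you can check the installed version (optional):

python3 -c "import gym; print(gym.__version__)"   # should print 0.24.0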
[2] Before the train_and_evaluate(args) call in the notebook, add the following:

import numpy as np

# net_dim must be an iterable of layer widths, not a bare int,
# because build_mlp_net unpacks it with *dims
if isinstance(args.net_dim, int):
    args.net_dim = np.array([args.net_dim])
print(type(args.net_dim))  # sanity check: should no longer be <class 'int'>
While neither of these is a permanent solution, they unblock you and may help in ultimately fixing the problems in the source code.
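Putting both pieces together, a rough sketch of the patched call site (assuming an elegantrl Arguments object named args has already been set up, and an ElegantRL version where train_and_evaluate still lives in elegantrl.train.run; see the comments below):

import numpy as np
from elegantrl.train.run import train_and_evaluate   # present in older versions only

if isinstance(args.net_dim, int):
    # build_mlp_net unpacks net_dim with *, so it must be an iterable of layer widths
    args.net_dim = np.array([args.net_dim])

train_and_evaluate(args)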
I can't seem to find the train_and_evaluate function in https://github.com/AI4Finance-Foundation/FinRL-Tutorials/blob/master/3-Practical/FinRL_MultiCrypto_Trading.ipynb
train_and_evaluate() is now gone from the train.run module...