
Massively Parallel Deep Reinforcement Learning. 🔥

Results: 156 ElegantRL issues, sorted by recently updated

```
File "E:\ElegantRL-master (1)\ElegantRL-master\elegantrl\agents\AgentDQN.py", line 117, in update_objectives
    if self.if_fit_cum_r:
AttributeError: 'AgentDoubleDQN' object has no attribute 'if_fit_cum_r'
```
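The traceback suggests `update_objectives` reads a flag that `AgentDoubleDQN.__init__` never assigns. A minimal sketch of one common defensive pattern (the class bodies and flag semantics here are illustrative, not ElegantRL's actual code): read optional attributes with `getattr()` and a default, so a subclass that never sets the flag does not raise `AttributeError`.

```python
# Hypothetical sketch, not ElegantRL's real implementation.
class AgentDQN:
    def update_objectives(self):
        # Default to False if the attribute was never assigned in __init__.
        if getattr(self, 'if_fit_cum_r', False):
            return 'fit cumulative return'
        return 'fit per-step return'


class AgentDoubleDQN(AgentDQN):
    # Deliberately sets no if_fit_cum_r, mirroring the reported issue.
    pass


agent = AgentDoubleDQN()
print(agent.update_objectives())  # runs without AttributeError
```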

Why is a warm-up of the ReplayBuffer used in helloworld_DQN_single_file.py but not in elegantrl?
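For context, a replay-buffer warm-up just fills the buffer with transitions from random actions before any gradient update, so the first sampled mini-batches are not dominated by a single untrained policy. A minimal sketch (the `ReplayBuffer`, `warm_up`, and `env_step` names are illustrative, not ElegantRL's API):

```python
import random


class ReplayBuffer:
    """Toy FIFO buffer of (state, action, reward) transitions."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []

    def append(self, transition):
        self.data.append(transition)
        if len(self.data) > self.capacity:
            self.data.pop(0)  # evict the oldest transition

    def __len__(self):
        return len(self.data)


def warm_up(buffer, env_step, warm_up_size, action_dim):
    """Fill the buffer using uniform random actions in [-1, 1]."""
    while len(buffer) < warm_up_size:
        action = [random.uniform(-1.0, 1.0) for _ in range(action_dim)]
        buffer.append(env_step(action))


# Toy environment step: returns a (state, action, reward) tuple.
buffer = ReplayBuffer(capacity=1000)
warm_up(buffer, lambda a: ([0.0], a, 0.0), warm_up_size=64, action_dim=2)
print(len(buffer))  # 64
```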

May I ask why the `if_random` exploration step before the `while True` loop was removed from training?

Hi, in this [paper](https://arxiv.org/pdf/2112.05923), is the stock-trading environment the `StockTradingEnv` class from _elegantrl/envs/StockTradingEnv.py_, or the `StockEnvNAS100` class from `finrl.meta.env_stock_trading.env_nas100__wrds`? Thanks

Hey, I've been using ElegantRL for a while now and have also compared it to several other RL frameworks. For me, it's a lot easier to use than SB3 or RLlib,...

https://github.com/AI4Finance-Foundation/ElegantRL/blob/5e828af1503098f4da046c0f12432dbd4ef8bd97/elegantrl/agents/AgentPPO.py#L203

Hi, in my quest to understand how the code works, I noticed this line in the _run_ function of the _PipeWorker_ class: `gpu_id = args.learner_gpus`. But _args.learner_gpus_ seems to be...

Hello, thanks for this amazing project, which aims to solve the bottleneck of environment interactions in DRL. I have read the paper several times and I am starting to make...

```
~/anaconda3/envs/ai4finance/lib/python3.8/site-packages/elegantrl/agents/AgentBase.py in explore_one_env(self, env, horizon_len, if_random)
     91     get_action = self.act.get_action
     92     for t in range(horizon_len):
---> 93         action = torch.rand(1, self.action_dim) * 2 - 1.0 if if_random else get_action(state)
     94 ...
```
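As background on the highlighted line: `torch.rand` samples uniformly in [0, 1), so multiplying by 2 and subtracting 1 rescales the random exploration action into [-1, 1), the usual continuous-action range. A plain-Python sketch of the same rescaling (torch-free, names illustrative):

```python
import random


def random_action(action_dim):
    # random.random() is uniform in [0, 1); * 2 - 1.0 rescales to [-1, 1),
    # mirroring `torch.rand(1, action_dim) * 2 - 1.0` from the traceback.
    return [random.random() * 2 - 1.0 for _ in range(action_dim)]


action = random_action(4)
assert all(-1.0 <= a < 1.0 for a in action)
```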

```
~/anaconda3/envs/ai4finance/lib/python3.8/site-packages/elegantrl/agents/AgentSAC.py in __init__(self, net_dims, state_dim, action_dim, gpu_id, args)
     15     self.act_class = getattr(self, 'act_class', ActorSAC)
     16     self.cri_class = getattr(self, 'cri_class', CriticTwin)
---> 17     super().__init__(net_dims=net_dims, state_dim=state_dim, action_dim=action_dim, gpu_id=gpu_id, args=args)
     18     self.cri_target = ...
```