ElegantRL
Massively Parallel Deep Reinforcement Learning. 🔥
First, thanks for your excellent work. I recently planned to apply ElegantRL in my research, and I'm trying to accelerate the training and sampling process in the way SubprocVecEnv does in Stable Baselines...
We will try to provide flexible parameter settings for users. We will use classes instead of dicts for parameter settings. For example, ConfigPPO is a class, which inherits from a...
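A minimal sketch of what class-based configs could look like; the names `Config` and `ConfigPPO` come from the issue, but the specific fields and defaults below are illustrative assumptions, not the actual ElegantRL API:

```python
# Hypothetical sketch: class-based hyperparameter configs replacing dicts.
# Field names and defaults are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Config:
    learning_rate: float = 3e-4  # shared default for all agents
    gamma: float = 0.99          # discount factor
    net_dim: int = 256           # network width


@dataclass
class ConfigPPO(Config):
    ratio_clip: float = 0.2      # PPO-specific clipping range
    lambda_gae: float = 0.95     # GAE smoothing factor


config = ConfigPPO(learning_rate=1e-4)  # override only what differs
```

Subclassing keeps algorithm-specific knobs next to the algorithm while inheriting shared defaults, which is harder to express cleanly with a flat dict.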
`demo_MADDPG` does not work. Can someone give a demo of a multi-agent algorithm implementation, such as MAPPO? Thank you very much.
There is a bug in `if_act_target` and `if_cri_target` in `AgentBase.py`. `class AgentXXX.__init__()` should set `args.if_cri_target = getattr(self, "if_cri_target", True)` (or `False`) by default. Taking AgentDQN as an example, the following code...
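A hedged sketch of the suggested `getattr` pattern; the class bodies are simplified stand-ins, and only the `getattr(self, ..., default)` idiom comes from the report:

```python
# Sketch of the reported fix: read a class-level attribute if the subclass
# defines one, otherwise fall back to a sensible default.
class AgentBase:
    def __init__(self, args):
        # Use the subclass's if_act_target/if_cri_target if set, else default.
        args.if_act_target = getattr(self, "if_act_target", True)
        args.if_cri_target = getattr(self, "if_cri_target", True)
        self.args = args


class AgentDQN(AgentBase):
    if_act_target = False  # example: this agent opts out of the actor target net


class Args:  # minimal stand-in for the hyperparameter container
    pass


agent = AgentDQN(Args())
```

Because `getattr` looks up the attribute on the instance (and thus the subclass), each `AgentXXX` can override the flag simply by declaring it, while unmodified agents keep the default.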
When creating a neural network, we can specify the network width, but we can't set the number of layers, so I want to add the hyperparameter `num_layer`. To be compatible...
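One way a `num_layer` hyperparameter could control depth is to build the layer-dimension list dynamically. A minimal sketch, assuming a PyTorch MLP; the helper name `build_mlp` and its defaults are assumptions, not ElegantRL's actual builder:

```python
# Hypothetical helper: build an MLP whose depth is set by num_layer.
import torch.nn as nn


def build_mlp(input_dim: int, output_dim: int, mid_dim: int = 256,
              num_layer: int = 3) -> nn.Sequential:
    """num_layer counts the Linear layers, so it must be >= 2."""
    dims = [input_dim] + [mid_dim] * (num_layer - 1) + [output_dim]
    layers = []
    for in_dim, out_dim in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(in_dim, out_dim), nn.ReLU()]
    del layers[-1]  # no activation after the output layer
    return nn.Sequential(*layers)


net = build_mlp(input_dim=8, output_dim=2, num_layer=4)
```

Keeping `num_layer=3` as the default would preserve the current fixed-depth behavior, so existing configs remain compatible.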
To use `multiprocessing`, the following code should be moved into `if __name__ == '__main__':` https://github.com/AI4Finance-Foundation/ElegantRL/blob/91c1c3d8747d5a3dcf83d4051ba15250b4a750a8/examples/tutorial_BipedalWalker-v3.py#L29-L48 .
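The reason for the guard: under the "spawn" start method (the default on Windows and macOS), each child process re-imports the main module, so any code that launches processes at module top level spawns recursively. A generic illustration with a toy worker, not ElegantRL code:

```python
# Child processes re-import this module under "spawn"; process-launching
# code must sit behind the __main__ guard or it recurses indefinitely.
import multiprocessing as mp


def rollout(env_id: int) -> int:
    """Toy stand-in for a sampling worker."""
    return env_id * env_id


if __name__ == '__main__':
    with mp.Pool(processes=2) as pool:
        results = pool.map(rollout, range(4))
    print(results)  # [0, 1, 4, 9]
```

The same applies to the linked tutorial script: everything from environment construction through `train_and_evaluate` belongs under the guard.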
I want to use ElegantRL's multiprocessing features to train a model on Atari environments, but perhaps because the versions update too quickly, I can't find a demo in `examples` that runs directly.
Hello, I ran ElegantRL with the FinRL processor using the function `DRLAgent_erl.DRL_prediction`, and it seemed not to be deterministic.
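Stochastic policies sample actions by default, so repeated predictions can differ even with the same weights; deterministic evaluation usually means fixing the random seed and taking the greedy action (argmax for discrete, mean for continuous) instead of sampling. A sketch with a placeholder network, since ElegantRL's actual prediction API may differ:

```python
# Sketch: making evaluation reproducible. The actor is a placeholder
# for a trained policy network, not ElegantRL's own class.
import torch

torch.manual_seed(0)  # fix seeds so any remaining sampling is repeatable

actor = torch.nn.Linear(4, 2)  # stand-in policy: state -> action scores
state = torch.zeros(1, 4)

with torch.no_grad():
    logits = actor(state)
    action = logits.argmax(dim=1)  # deterministic: greedy instead of sampled
```

If the prediction path still samples from a distribution internally, seeding alone makes runs repeatable but not greedy; switching to the argmax/mean action removes the randomness entirely.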
Hi, is it possible to obtain feature-importance plots from the agents? Or perhaps to get it working with the SHAP library? Thank you!
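Even without SHAP, a model-agnostic permutation importance can be computed directly: shuffle one input feature at a time and measure how much the policy's output changes. A rough sketch with a placeholder policy, not an ElegantRL API:

```python
# Sketch: permutation feature importance for a policy, no SHAP required.
# `policy` is a placeholder callable mapping states -> action scores.
import numpy as np


def permutation_importance(policy, states: np.ndarray) -> np.ndarray:
    """Mean absolute change in output when each feature is shuffled."""
    rng = np.random.default_rng(0)
    baseline = policy(states)
    importances = np.empty(states.shape[1])
    for j in range(states.shape[1]):
        shuffled = states.copy()
        rng.shuffle(shuffled[:, j])  # destroy feature j's information
        importances[j] = np.abs(policy(shuffled) - baseline).mean()
    return importances


# Toy policy that only reads feature 0, so it should dominate the ranking.
policy = lambda s: s[:, 0] * 2.0
states = np.random.default_rng(1).normal(size=(100, 3))
print(permutation_importance(policy, states))
```

Wrapping the agent's actor network in such a callable (state batch in, action scores out) would give a per-feature importance vector that can be plotted directly; SHAP's deep explainers should in principle accept the same wrapped network, though that is untested here.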