
Proximal Policy Optimization (PPO) with Keras Implementation

4 RL-PPO-Keras issues

After testing, gym 0.26.2 does not work; downgrading to 0.25.2 is feasible.
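This breakage is consistent with the Gym 0.26 API change, where `env.step()` returns a 5-tuple `(obs, reward, terminated, truncated, info)` instead of the old 4-tuple `(obs, reward, done, info)`. Besides pinning `gym==0.25.2` as the issue suggests, a small shim can normalize both forms; the helper name below is hypothetical, not part of this repo:

```python
def normalize_step(result):
    """Normalize an env.step() result to the old 4-tuple form.

    Gym <= 0.25 returns (obs, reward, done, info); Gym >= 0.26 returns
    (obs, reward, terminated, truncated, info). This shim lets code
    written against the old API accept either.
    """
    if len(result) == 5:
        obs, reward, terminated, truncated, info = result
        # An episode is "done" if it either terminated or was truncated.
        return obs, reward, terminated or truncated, info
    return result  # already (obs, reward, done, info)
```

Note that `env.reset()` also changed in 0.26 (it now returns `(obs, info)`), so a full compatibility layer would need to handle that call as well.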

Hi, I just went through your code and found two small fixes: - if self.dic_agent_conf["OPTIMIZER"] is not "Adam" and RMSProp or the fallback Adam is used, no loss was defined - Entropy...
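The first fix amounts to ensuring every optimizer branch defines a loss, not only the "Adam" branch. A schematic sketch of corrected selection logic, assuming the config keys from the issue (the function name and the `"LOSS"` key are hypothetical, not taken from this repo):

```python
def build_compile_args(dic_agent_conf):
    """Return (optimizer, loss) so that every branch has a loss defined.

    The reported bug: only the "Adam" branch set a loss, so the RMSProp
    and fallback branches compiled the model without one.
    """
    name = dic_agent_conf.get("OPTIMIZER", "Adam")
    # Side note: compare strings with `==`/`!=`, not `is not` (an identity
    # check), which the quoted condition in the issue appears to use.
    if name == "Adam":
        optimizer = "adam"
    elif name == "RMSProp":
        optimizer = "rmsprop"
    else:
        optimizer = "adam"  # fallback
    # Define the loss once, outside the branches, so no path omits it.
    loss = dic_agent_conf.get("LOSS", "mean_squared_error")
    return optimizer, loss
```

The resulting pair would then be passed to a single `model.compile(optimizer=..., loss=...)` call.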

Hi, I think the function _shared_network_structure is not doing what we expect: it actually adds different layers with different parameters each time we call it, not a shared layer.
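In Keras, weights are shared only when the same layer *instance* is reused; constructing a new layer inside a builder function creates fresh parameters on every call, which matches the behavior this issue describes. A framework-free illustration of the difference (the `Layer` stand-in and builder names are hypothetical):

```python
import itertools

_ids = itertools.count()

class Layer:
    """Stand-in for a Keras layer: each instance owns its own parameters."""
    def __init__(self):
        self.param_id = next(_ids)  # unique per instance, like fresh weights

def build_new_layer():
    # Mirrors the reported bug: a new layer (new parameters) on every call.
    return Layer()

_shared = Layer()  # create the layer once ...

def build_shared_layer():
    # ... and reuse the same instance so the parameters are truly shared.
    return _shared

# Two calls to the buggy builder yield distinct parameter sets:
a, b = build_new_layer(), build_new_layer()
# Two calls to the fixed builder yield the same parameter set:
c, d = build_shared_layer(), build_shared_layer()
```

The same pattern applies in Keras proper: hold the `Dense`/`Conv` instance in a variable (or on `self`) and call that instance from both the actor and critic branches, instead of constructing it inside the builder.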

Hoping that this project is not abandoned and you're willing to patch this: when trying to load a saved model using the Agent.load_model method, it throws the exception: 'keras...