
Dynamic programming

Open lamare3423 opened this issue 2 years ago • 6 comments

How can we add a dynamic Bellman equation to the reward function? It would give us more sensitive rewards. Thank you.

lamare3423 avatar Jan 08 '22 17:01 lamare3423

So how can we implement policy iteration and value iteration? If you have any ideas, email me: [email protected]

lamare3423 avatar Jan 08 '22 20:01 lamare3423

@lamare3423 What do you mean by "add dynamic bellman equation for reward function"? Do you want to customize the reward function? This environment is ready to use with any RL algorithm. Just create an env. The reset() method restores the environment to its initial state and returns the initial state. The step() method returns the next state, the reward, and information about termination. This information is sufficient to perform value or policy iteration, or even more complicated algorithms like SAC, PPO, etc.
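For illustration, a minimal interaction loop with that interface could look like the sketch below. The environment id "RoomNavigation-v0" and the module name gym_vrep are assumptions here, not necessarily what this package registers; substitute the real id.

```python
# A minimal sketch of the reset()/step() loop described above.
# "RoomNavigation-v0" and the gym_vrep module name are assumptions;
# use whatever id this package actually registers.
import gym
import gym_vrep  # noqa: F401  -- importing registers the environments

env = gym.make("RoomNavigation-v0")

state = env.reset()                     # restore the env and get the initial state
done = False
while not done:
    action = env.action_space.sample()  # stand-in for your policy (DDPG, SAC, PPO, ...)
    next_state, reward, done, info = env.step(action)
    # (state, action, reward, next_state, done) is everything a value/policy
    # iteration scheme or an actor-critic agent needs from the environment
    state = next_state

env.close()
```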

gbartyzel avatar Jan 16 '22 14:01 gbartyzel

@Souphis I want to understand something. If we use a dynamic reward function, will our reward function be more successful? Is that true? For example, how can we customize the reward function for your work? Should we write the code that builds the reward function with dynamic programming in our main function, or code it into the agent we will use? Do you have any examples? For example, how can the reward function be converted into a dynamic reward function using the DDPG algorithm, and does it help? Thanks,

lamare3423 avatar Jan 18 '22 14:01 lamare3423

@lamare3423 Oh, okay, so you want to change the reward function during learning? There are two solutions:

  • create a wrapper for my environment (check https://github.com/openai/gym/tree/master/gym/wrappers); a minimal sketch follows below
  • create your own agent (e.g. DDPG) that modifies the reward during learning (check curiosity-driven reinforcement learning)
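To make the first option concrete, here is a minimal gym.RewardWrapper sketch. The scaling inside reward() is purely illustrative, not the reward design of this environment; plug in whatever "dynamic" shaping you need.

```python
# Minimal reward-shaping wrapper sketch (option 1 above).
# The scaling is illustrative only; the environment and agent stay untouched.
import gym


class ShapedReward(gym.RewardWrapper):
    """Rescales the environment reward before the agent sees it."""

    def __init__(self, env, scale=1.0):
        super().__init__(env)
        self.scale = scale

    def reward(self, reward):
        # Replace this with any shaping you need.
        return self.scale * reward


# usage (the env id is a placeholder):
# env = ShapedReward(gym.make("RoomNavigation-v0"), scale=0.5)
```

If the shaping needs the observation as well (e.g. distance to the goal), subclass gym.Wrapper and override step() instead, since RewardWrapper only receives the scalar reward.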

gbartyzel avatar Jan 18 '22 14:01 gbartyzel

@Souphis First of all, thank you for all the information. You said to modify the reward function during learning inside the agent. I have read a lot, but I still don't understand how to implement this in my environment and agent in code. My biggest problem is this: when I have a scene like the one in the figure, with the robot (hexagon) and the target (star) far apart, my results are successful. (Click here for successful environment) My results fail when I have a scene like the one in the figure with the robot (hexagon) and the target (star) close together. In other words, when the distance between the target and the robot is short, the robot constantly crashes into obstacles. (Click here for failure environment)

I'm dealing with a mobile robot that avoids obstacles and drives to a target. I'm trying to solve the situations I described above with what I've managed so far. I've used PyRep and I'm working with a DDPG agent. I don't know how to make the changes you suggest. What should I change in the agent itself and in its network updates? For example, I created a function called "build critic train method" in my DDPG agent code; do I need to make changes related to the reward function in that part?

lamare3423 avatar Jan 22 '22 23:01 lamare3423

Hi sir, I am preparing a master's thesis (a deep reinforcement learning approach based on dynamic path planning for mobile robots) and I found research close to my study: https://github.com/dranaju/project. Since I am new to programming I couldn't run the code; there were some errors, as you can see in the attached file. Sir, could you help me run the code? It would be a great favor.

alisalim70 avatar Nov 13 '23 09:11 alisalim70