Darth Coder

Results 8 comments of Darth Coder

@taylorlulu @ROSNovice @AbelSyx @happy-yasuo404 @Zhaofeng-Tian Hey guys, I have two questions. Q1. Does anyone have the right code? This repo doesn't have the novel reward that considers regions (green or red)...

Thank you @taylorlulu! That was helpful :> By the way, I found some code in [model_states.py](https://github.com/NithishkumarS/DWA-RL/blob/main/openai_ros/openai_ros/src/openai_ros/task_envs/turtlebot2/model_states.py). Based on this code, my opinion is that the author did...

For readers who have this issue: I hope this will be helpful to you. I wrote some code similar to [model_states.py](https://github.com/NithishkumarS/DWA-RL/blob/main/openai_ros/openai_ros/src/openai_ros/task_envs/turtlebot2/model_states.py). Here is the code...

How are you, @gustavodemari? In my opinion, it is not a bug. See this [link](https://github.com/HumanCompatibleAI/imitation/blob/d74e90368e9b2ee82819d78145b50e122a59d970/src/imitation/data/rollout.py#L563): `flatten_trajectories` creates `next_obs` and `dones` automatically. In this [code](https://github.com/HumanCompatibleAI/imitation/blob/master/src/imitation/algorithms/adversarial/common.py#L424), which is used in GAIL...
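To illustrate what "creates `next_obs` and `dones` automatically" means, here is a minimal sketch of the flattening logic. Note this is a simplified re-implementation for illustration, not the library's actual code: the function name and the dict-based trajectory format are stand-ins for the library's dataclasses (a real `Trajectory` stores `obs` of length T+1 and `acts` of length T, which is exactly what makes the derivation below possible).

```python
import numpy as np

def flatten_trajectories_sketch(trajectories):
    """Illustrative sketch: since each trajectory stores T+1 observations
    for T actions, next_obs is just obs shifted by one, and dones marks
    only the final step of each terminal trajectory."""
    obs, acts, next_obs, dones = [], [], [], []
    for traj in trajectories:
        n_steps = len(traj["acts"])
        obs.extend(traj["obs"][:-1])        # states before each action
        next_obs.extend(traj["obs"][1:])    # states after each action
        acts.extend(traj["acts"])
        step_dones = [False] * n_steps
        step_dones[-1] = traj.get("terminal", True)  # done only at the end
        dones.extend(step_dones)
    return dict(
        obs=np.array(obs),
        acts=np.array(acts),
        next_obs=np.array(next_obs),
        dones=np.array(dones),
    )

# toy trajectory: 3 observations, 2 actions -> 2 transitions
traj = {"obs": [0, 1, 2], "acts": [10, 11], "terminal": True}
flat = flatten_trajectories_sketch([traj])
# flat["next_obs"] -> [1, 2], flat["dones"] -> [False, True]
```

So callers never supply `next_obs` or `dones` themselves; both fall out of the trajectory layout.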

Hi @saeed349! Here is my code:

```python
dense_rollouts = rollout.rollout(
    dense_expert,
    DummyVecEnv([lambda: RolloutInfoWrapper(dense_env)]),
    rollout.make_sample_until(min_timesteps=None, min_episodes=250),
    rng=dense_rng,
)
dense_transitions = rollout.flatten_trajectories(dense_rollouts)
dense_bc = CustomBC(
    observation_space=dense_env.observation_space,
    action_space=dense_env.action_space,
    policy=dense_expert.policy,
    demonstrations=dense_transitions,
    rng=dense_rng,
```
...

@1213142332 How are you? Actually, it is a merge of the published code and the paper.

Oh, thank you for your answer, @LifGorg! I will try that :). By the way, I have three questions about your code. 1. When you take a prediction data...