motion-planner-reinforcement-learning
End-to-end motion planner using Deep Deterministic Policy Gradient (DDPG) in Gazebo
Hello, I am an RL learner working on the EMS of an HEV, and I have also run into the action saturation problem you met in your past project. I'm...
I just want to know which version of TensorFlow this uses.
Hi, I see that in this code you output both linear and angular velocity; I'd like to know how you got around the action saturation problem.
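A common way around action saturation (offered here only as a hedged sketch, not the author's confirmed method) is to bound the actor's raw output with `tanh` and rescale it into the velocity limits, instead of letting an unbounded output rail against a hard clip. The bounds below (linear in [0, 1], angular in [-1, 1]) follow the ranges discussed in this thread; the function name is illustrative.

```python
import numpy as np

def squash_action(raw, low, high):
    """Map an unbounded actor output into [low, high] via tanh.

    tanh bounds the value to (-1, 1); the affine rescale then shifts
    it into the desired action range, so gradients never have to fight
    a hard clip at the boundary.
    """
    return low + (np.tanh(raw) + 1.0) * 0.5 * (high - low)

raw = np.array([3.7, -5.2])             # large pre-activation values that would saturate a clip
lin = squash_action(raw[0], 0.0, 1.0)   # linear velocity, bounded to [0, 1]
ang = squash_action(raw[1], -1.0, 1.0)  # angular velocity, bounded to [-1, 1]
```

With this scheme the policy can still reach the extremes of the range, but it approaches them smoothly rather than getting stuck at a clipped boundary.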
How do I set specific goals instead of random ones? Thanks a lot.
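One simple approach (a sketch only; the actual goal-respawning functions in this repo are named differently and are not shown here) is to cycle through a fixed list of waypoints instead of sampling a random position each episode:

```python
import itertools

# Illustrative fixed goal list, cycled instead of sampled at random.
# Coordinates are placeholders, not values from the repo.
GOALS = [(2.0, 0.0), (0.0, 2.0), (-2.0, 0.0)]
goal_iter = itertools.cycle(GOALS)

def next_goal():
    """Return the next (x, y) goal, wrapping around the list."""
    return next(goal_iter)
```

Wherever the environment currently draws a random goal position, that call would be replaced with `next_goal()`.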
Hello, thanks for your work. When I compile your code, an issue stops me and I have no solution: #SpawnModel: Failure - model name target already exists. Did you...
Hello! If I want to reproduce the code from this paper (changing DDPG to ADDPG), how should I go about it? Could I add you on QQ to learn and discuss? QQ: 2940620847. Many thanks!
Hello! There's a ValueError when I run the .py script to test old models: ValueError: Cannot feed value of shape (1, 366) for Tensor 'Placeholder:0', which has shape '(?, 16)'. Is that caused...
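The (1, 366) vs (?, 16) mismatch suggests the saved model was trained on a reduced state (for example a downsampled laser scan plus a few goal/velocity terms), while the test script feeds a much larger raw scan. As a hedged illustration only (the beam counts and helper name below are assumptions, not taken from the repo), a 360-beam scan could be thinned before being fed to the network:

```python
import numpy as np

def downsample_scan(scan, n_out=10):
    """Keep every len(scan)//n_out-th beam, returning n_out readings."""
    step = len(scan) // n_out
    return np.asarray(scan[::step][:n_out])

scan = np.random.rand(360)        # full raw laser scan (illustrative size)
reduced = downsample_scan(scan)   # 10 readings, to be concatenated with goal terms
```

The key point is that whatever preprocessing produced the 16-dim training state must be applied identically at test time, otherwise the placeholder shapes cannot match.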
Hello, I tried your package with ROS Melodic and Python 2.7 and it worked. So I'd like to test it in a bigger input environment this time, with the Hokuyo...
Hello, actor_network outputs a linear velocity in [0, 1] and an angular velocity in [-1, 1], but agent_stage_1.py uses a[0] = np.clip(np.random.normal(a[0], var), 0., 1.) and a[1] = np.clip(np.random.normal(a[1], var), -0.5, 0.5). Why is the angular velocity handled this way? Does using batch_norm in the last layer make a difference?
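The two lines quoted above implement a standard DDPG exploration step: Gaussian noise of scale `var` is added to the deterministic action, then the result is clipped back into a valid range. Clipping the angular term to [-0.5, 0.5], tighter than the network's [-1, 1] output, plausibly just limits the turning rate during exploration, though only the author can confirm the intent. A self-contained sketch of that step (values and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
var = 0.3                   # exploration noise scale, typically decayed over training
a = np.array([0.6, 0.2])    # (linear, angular) action from the actor network

# Add Gaussian noise, then clip each component into its allowed range:
# linear velocity into [0, 1], angular velocity into the tighter [-0.5, 0.5].
a_explore = np.array([
    np.clip(rng.normal(a[0], var), 0.0, 1.0),
    np.clip(rng.normal(a[1], var), -0.5, 0.5),
])
```

Whether batch_norm on the last layer matters is a separate question; normalizing the final pre-activation changes the scale of the output the noise is added to, so it can interact with the clipping ranges above.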
Has this ever happened to anyone?