deepworlds
Deep Mimic example
Relevant discussion here #18, suggestion by rohit-kumar-j.
Original Deep Mimic implementation
A basic Deep Mimic example could include a "teacher" cartpole robot that uses a PID controller and a "student" cartpole robot that is exactly the same as the existing cartpole example using RobotSupervisor, plus an emitter/receiver scheme through which the student receives information from the "teacher" cartpole robot.
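As a rough illustration of the emitter/receiver idea (this is not code from the repository; the device names `polePosSensor` and `cartMotor` and the gains are placeholder assumptions), the teacher controller could balance its pole with a simple PD loop and broadcast its state and action every step for the student to consume:

```python
# Hypothetical "teacher" cartpole controller sketch using the standard Webots API.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

pole_sensor = robot.getDevice("polePosSensor")  # assumed hinge position sensor name
pole_sensor.enable(timestep)
motor = robot.getDevice("cartMotor")            # assumed linear motor driving the cart
motor.setPosition(float("inf"))                 # switch the motor to velocity control
emitter = robot.getDevice("emitter")

KP, KD = 30.0, 1.5                              # untuned PD gains, purely illustrative
prev_angle = 0.0

while robot.step(timestep) != -1:
    angle = pole_sensor.getValue()
    d_angle = (angle - prev_angle) / (timestep / 1000.0)
    prev_angle = angle

    action = KP * angle + KD * d_angle          # PD correction keeping the pole upright
    motor.setVelocity(action)

    # Broadcast the teacher's state and action; the student enables its own
    # receiver, parses this string each step, and can use it in its
    # observation and/or imitation reward.
    emitter.send(f"{angle},{action}")
```

The student side would simply enable its receiver with the same timestep and read the queued packets inside its RobotSupervisor step.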
@all-contributors please add @rohit-kumar-j for ideas
@tsampazk, Thank you for adding me as a contributor!
I am currently working on the Deep Mimic example in PyBullet, testing methods that would help parent the stock humanoid so that any other similarly structured robot can be used for training without much initial setup. Here are some results:
In this video, the stock humanoid is driving the custom-designed robot using inverse kinematics (I'm hoping that this will serve as the basis for the robot's reward function during training):
https://user-images.githubusercontent.com/37873142/108199067-7b93dc00-7142-11eb-9897-6dbe339632fb.mp4
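For reference, the pose term of the DeepMimic imitation reward has roughly the shape below. This is a simplified sketch (the helper name, the plain joint-angle differences, and the single scale factor are my assumptions; the paper uses per-joint quaternion differences and weights):

```python
import numpy as np

def pose_imitation_reward(joint_angles, ref_joint_angles, scale=2.0):
    """Simplified DeepMimic-style pose reward: exponentially penalize
    deviation of the robot's joint angles from the IK-retargeted reference."""
    err = np.asarray(joint_angles) - np.asarray(ref_joint_angles)
    return float(np.exp(-scale * np.dot(err, err)))
```

The full reward in the paper additionally combines velocity, end-effector, and center-of-mass terms as a weighted sum.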
Once complete, hopefully we can port this example to Webots.
Warm Regards, Rohit
@rohit-kumar-j This looks really promising Rohit! I'm looking forward to seeing the complete example, so as to start working on porting it to Webots. I think it would make for an impressive example to be added in the deepworlds repository.
@tsampazk, I agree. Unfortunately, I do not know the Webots code-base and methods, so I can help out with the logic and implementation while simultaneously learning Webots. I hope it is okay if I post updates on the example in this thread itself.
Warm Regards, Rohit Kumar J
> I hope it is okay if I post updates on the example in this thread itself.
@rohit-kumar-j Yeap sounds fine, go ahead. 😀
Here is an update on the ghost robot that the robot will need to follow. This will be used to train the RL algorithm in accordance with Deep Mimic's approach (at least that's the hope for now); a rough sketch of this kind of mocap playback follows the video below.
- Some of the data is a bit choppy as it is using IK to follow mocap from the humanoid (which is hidden).
- The speed of motion of each joint, the robot base, and the joint calibration offsets are variable (they need tweaking; some of them are shown).
- There are issues with joint retargeting as of now (right foot of the robot), but they will be tweaked later.
https://user-images.githubusercontent.com/37873142/108469979-39d47400-72af-11eb-8cac-b68310a0ced6.mp4
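The ghost playback generally follows the pattern sketched below; this is my assumption about the structure (the URDF name, joint ordering, and mocap frame format are placeholders), not the actual code:

```python
import pybullet as p

p.connect(p.GUI)
ghost = p.loadURDF("ghost_robot.urdf", useFixedBase=True)  # placeholder URDF

def set_ghost_pose(frame_a, frame_b, alpha):
    """Interpolate between two mocap frames (lists of joint angles) and pose
    the ghost kinematically with resetJointState, so it acts purely as a
    visual/reference target and is not affected by dynamics."""
    for joint_index, (qa, qb) in enumerate(zip(frame_a, frame_b)):
        q = (1.0 - alpha) * qa + alpha * qb
        p.resetJointState(ghost, joint_index, q)
```

The retargeting itself (humanoid mocap onto the custom robot) could be done with `p.calculateInverseKinematics` on the target robot's end effectors, which is presumably where the choppiness and right-foot issues mentioned above come from.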
Warm Regards, Rohit Kumar J
We are at preliminary training using IK. The code structure that was initially built upon was... inelegant; however, the agents are stand-alone, so we may be able to re-build the environment files while reusing the agents (or perhaps use the agents in deepworlds :thinking: :thought_balloon:). However, this may take some time.
https://user-images.githubusercontent.com/37873142/115606298-27070b80-a301-11eb-9919-9256c9cf718d.mp4
The checkpoint file (agent) here is at 18 million samples. According to the Deep Mimic paper, it takes about 61 million samples for the stock humanoid to achieve a perfect walking gait and 48 million samples for Atlas. They also mention that it takes 2 days to train the humanoid. To get to the 18 million sample mark seen in this video, it took me 24 hours of training with 12 (or was it 6? :thinking:) cores (actually on my friend's PC). I think it needs some tuning to optimize the results.
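For a rough sense of scale (assuming linear scaling with samples on the same hardware): 61M / 18M × 24 h ≈ 81 hours, i.e. roughly 3.4 days to reach the sample count the paper reports for the humanoid.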
Hopefully, I can begin developing this example in Webots once this is fully trained.
Warm Regards, Rohit Kumar J
PS: The sudden jump of the robot at 00:12 was me dragging it with the mouse :D