
Port HIL SERL

AdilZouitine opened this issue 11 months ago • 8 comments

Implementing HIL-SERL

This PR implements the HIL-SERL approach as described in the paper. HIL-SERL combines human-in-the-loop intervention with reinforcement learning to enable efficient learning from human demonstrations.

The implementation includes:

  • Reward classifier training with pretrained architecture: Added a lightweight classification head built on top of a frozen, pretrained image encoder from HuggingFace. This classifier processes robot camera images to predict rewards, supporting binary and multi-class classification. The implementation includes metrics tracking with WandB (a minimal sketch of the classifier head follows this list).

  • Environment configurations for HILSerlRobotEnv: Added configuration classes for the HIL environment including VideoRecordConfig, WrapperConfig, EEActionSpaceConfig, and EnvWrapperConfig. These handle parameters for video recording, action space constraints, end-effector control, and environment-specific settings.

  • SAC-based reinforcement learning algorithm: Implemented Soft Actor-Critic (SAC) algorithm with configurable network architectures and optimization settings. The implementation includes actor and critic networks, policy configurations, temperature auto-tuning, and target network updates via exponential moving averages.

  • Actor-learner architecture with efficient communication protocols: Added actor server script that establishes connection with the learner, creating queues for parameters, transitions, and interactions. Implemented LearnerService class with gRPC for efficient streaming of parameters and transitions between components.

  • Replay buffer for storing transitions: Added ReplayBuffer class for storing and sampling transitions in reinforcement learning. Includes functions for random cropping and shifting of images, memory optimization, and batch sampling capabilities.

  • End-effector control utilities: Implemented input controllers (KeyboardController and GamepadController) that generate motion deltas for robot control. Added utilities for finding joint and end-effector bounds, and for selecting regions of interest in images.

  • Human intervention support: Added RobotEnv class that wraps robot interfaces to provide a consistent API for policy evaluation with integrated human intervention. Created PyTorch-compatible action space wrappers for seamless integration with PyTorch tensors.
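
As a rough illustration of the reward classifier item above, here is a minimal sketch of a small classification head on a frozen HuggingFace image encoder. The checkpoint name, pooling choice, and class structure are assumptions for the example, not the actual code in this PR:

    import torch
    import torch.nn as nn
    from transformers import AutoModel

    class RewardClassifierSketch(nn.Module):
        """Lightweight classification head on top of a frozen, pretrained image encoder."""

        def __init__(self, encoder_name="google/vit-base-patch16-224", num_classes=2):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(encoder_name)
            for p in self.encoder.parameters():  # freeze the backbone
                p.requires_grad = False
            self.head = nn.Linear(self.encoder.config.hidden_size, num_classes)

        def forward(self, pixel_values):
            with torch.no_grad():  # backbone is frozen, only the head is trained
                feats = self.encoder(pixel_values=pixel_values).last_hidden_state[:, 0]
            return self.head(feats)  # binary or multi-class reward logits

    logits = RewardClassifierSketch()(torch.randn(1, 3, 224, 224))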

Engineering Design Choices for HIL-SERL Implementation

Environment Abstraction and Entry Points

Currently, environment building for both simulation and real robot training is embedded within gym_manipulator.py. This creates a clean interface for robot interaction. While this approach works well for our immediate needs, future discussions may consider consolidating all environment creation through a single entry point in lerobot.common.envs.factory::make_env for consistency across the codebase and better maintainability.

Gym Manipulator

The gym_manipulator.py script contains the main RobotEnv class, which defines a gym-based interface for the Manipulator robot class. It also contains a set of wrappers that can be used on top of the RobotEnv class to provide additional functionality necessary for training. For example, the ImageCropResizeWrapper class is used to crop the image to a region of interest and resize it to a fixed size, EEActionWrapper is used to convert the end-effector action space to joint position commands, and so on.
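
As a generic illustration of the wrapper pattern described above (not the actual ImageCropResizeWrapper implementation), a crop-and-resize observation wrapper in gymnasium style could look like the sketch below; it assumes the environment returns a single HxWx3 uint8 image observation, and the ROI format is made up for the example:

    import cv2
    import gymnasium as gym
    import numpy as np

    class CropResizeObservationSketch(gym.ObservationWrapper):
        def __init__(self, env, roi, size):
            super().__init__(env)
            self.roi = roi    # (top, left, height, width) region of interest
            self.size = size  # (height, width) of the resized output
            self.observation_space = gym.spaces.Box(
                low=0, high=255, shape=(size[0], size[1], 3), dtype=np.uint8
            )

        def observation(self, obs):
            top, left, h, w = self.roi
            crop = obs[top:top + h, left:left + w]
            return cv2.resize(crop, (self.size[1], self.size[0]))  # cv2 expects (width, height)

Wrappers compose by nesting, e.g. env = CropResizeObservationSketch(base_env, roi=(0, 0, 200, 200), size=(128, 128)); gym_manipulator.py stacks its wrappers on top of RobotEnv in the same way.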

The script contains three additional functions:

  • make_robot_env: This function builds a gymnasium environment with the RobotEnv base and the requested wrappers.
  • record_dataset: This function records an offline dataset of demonstrations by logging the robot's actions in the environment. This dataset can be used to train the reward classifier or as the offline dataset for RL.
  • replay_dataset: This function replays a recorded dataset, which can be useful for debugging the action space on the robot.

You can record/replay a dataset by setting the mode- and dataset-related arguments of HILSerlRobotEnvConfig in lerobot/common/envs/configs.py (more details in the guide).

Q: Why not use control_robot.py for collecting and replaying data?

A: Since we mostly use end-effector control and different teleoperation devices (gamepad, keyboard, or leader), it is more convenient to collect and replay data using the gym env interface in gym_manipulator.py. After PR #777 we might be able to seamlessly change the teleoperation device and action space; then we can revert to using control_robot.py for collecting and replaying data.

Optional Dataset in TrainPipelineConfig

The TrainPipelineConfig class has been modified to make the dataset parameter optional. This reflects the reality that while imitation learning requires demonstration data, pure reinforcement learning algorithms can function without an offline dataset. This makes the training pipeline more versatile and better aligned with various learning paradigms supported by HIL-SERL.
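
As a purely hypothetical sketch of the idea (the field names below are stand-ins, not the actual TrainPipelineConfig definition), making the dataset optional means a pure-RL run can simply leave it unset:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TrainPipelineConfigSketch:  # hypothetical stand-in, not the actual class
        policy: str = "sac"
        dataset: Optional[str] = None  # imitation learning needs demos; pure online RL can leave this None

    cfg = TrainPipelineConfigSketch()
    if cfg.dataset is not None:
        print(f"loading offline dataset: {cfg.dataset}")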

Consolidation of Implementation Files

For actor_server.py, learner_server.py, and gym_manipulator.py, we deliberately chose to create larger, more comprehensive files rather than splitting functionality across multiple smaller files. While this approach goes against some code organization principles, it significantly reduces the cognitive load required to understand these critical components. Each file represents a complete, coherent system with clear boundaries of responsibility.

Organization of Server-Side Components

We've placed multiple related files in the lerobot/script/server folder as a first step toward better organization. This groups related functionality for the actor-learner architecture. We're waiting for reviewer feedback before proceeding with further organization to ensure our approach aligns with the project's overall structure.

MultiAdamConfig for Optimizer Management

We introduced the MultiAdamConfig class to simplify handling multiple optimizers. Reinforcement learning methods like SAC typically rely on different networks (actor, critic, temperature) that are optimized at different frequencies and with different hyperparameters (see the sketch after this list). This class:

  • Provides a clean interface for creating and managing multiple optimizers
  • Reduces error-prone boilerplate code when updating different networks
  • Enables more sophisticated optimization strategies with minimal code changes
  • Simplifies checkpoint saving and loading for training resumption
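
A minimal sketch of what such a multi-optimizer setup abstracts over, using plain PyTorch and dummy modules in place of the real SAC networks (this illustrates the pattern, not the MultiAdamConfig API):

    import torch

    # Dummy stand-ins for the actor, critic, and temperature parameter.
    actor = torch.nn.Linear(8, 2)
    critic = torch.nn.Linear(10, 1)
    log_alpha = torch.nn.Parameter(torch.zeros(1))

    optimizers = {
        "actor": torch.optim.Adam(actor.parameters(), lr=3e-4),
        "critic": torch.optim.Adam(critic.parameters(), lr=3e-4),
        "temperature": torch.optim.Adam([log_alpha], lr=1e-4),  # different hyperparameters per group
    }

    # Checkpointing: save and restore every optimizer state under its name.
    torch.save({name: opt.state_dict() for name, opt in optimizers.items()}, "optimizers.pt")
    loaded = torch.load("optimizers.pt")
    for name, opt in optimizers.items():
        opt.load_state_dict(loaded[name])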

Gradient Flow Through Normalization

We removed the torch.no_grad() decorator from normalization functions to allow gradients to flow through these operations. This is essential for end-to-end training where normalized inputs need to contribute to the gradient computation. Without this change, backpropagation would be blocked at normalization boundaries, preventing the model from learning to account for input normalization during training.
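
A small self-contained illustration of why this matters (generic PyTorch, not the lerobot normalization code): when the normalization stays differentiable, gradients reach the layers upstream of it.

    import torch

    mean, std = torch.tensor([0.5]), torch.tensor([0.25])

    def normalize(x):
        # No torch.no_grad() here: the operation stays on the autograd graph,
        # so the loss can backpropagate through it into upstream layers.
        return (x - mean) / std

    encoder = torch.nn.Linear(1, 1)
    loss = normalize(encoder(torch.randn(4, 1))).pow(2).mean()
    loss.backward()
    print(encoder.weight.grad is not None)  # True: the gradient passed through the normalization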


How it was tested

  • We trained an agent on ManiSkill using this actor-learner architecture. The main task is PushCube-v1. The point of the ManiSkill experiments is to validate that the implementation of the soft actor-critic is correct; for this baseline there are no human interventions. We validate that the implementation works with both sparse and dense rewards, with and without an offline dataset.

[Figure: reward curve on ManiSkill, training without offline data or human intervention.]

  • Another baseline is the MuJoCo-based simulation of the Franka Panda arm in the repo HuggingFace/gym-hil. We have implemented the ability to teleoperate the simulated robot with an external keyboard or gamepad device.

[Figures: intervention rate and reward vs. time during one training run.] We are able to train a policy to a 100% success rate within 10-30 minutes.

Other videos using this implementation:

  • Training timelapse for a pick and lift task: https://www.youtube.com/watch?v=99sVWGECBas

  • Learning a policy with this implementation on a push cube task with the Piper X arm - https://www.youtube.com/watch?v=2pD1yhEvSgc

  • Learning a cube insertion task with the SO-100

https://github.com/user-attachments/assets/51525dc4-9db5-4aac-b656-c87aae97f154


How to check out & try it (for the reviewer) 😃

Follow this guide 😄

AdilZouitine avatar Jan 17 '25 08:01 AdilZouitine

Some tests are missing, but they're on the way - https://github.com/huggingface/lerobot/pull/1074.

The networking part will come after that.

helper2424 avatar May 07 '25 12:05 helper2424

The cube insertion task video cannot be watched.

Ke-Wang1017 avatar May 07 '25 21:05 Ke-Wang1017

I think that https://github.com/michel-aractingi/lerobot-hilserl-guide should be mentioned somewhere in the README.md or in some docs. Once the current PR is merged, it will be hard to find the guide and completely unclear how to use HIL-SERL.

Also, we removed all docs from the repo, but just imagine: the PR is merged, I open the repo, and I don't understand what is going on with HIL-SERL. I think that we should provide some basic documentation. It shouldn't be super extensive; maybe it should have some overview and just link to https://github.com/michel-aractingi/lerobot-hilserl-guide.

Also, it would be great to mention HIL-SERL in the main README.

helper2424 avatar May 09 '25 15:05 helper2424

Except for the comments I have already provided, the changes look good.

I would also like to add several points:

  1. Security for the networking part. This probably shouldn't be part of the current PR, but it definitely should be implemented. At the moment any actor can connect to any learner. That is not great, as in the future we may have learners that serve different networks, so actors running one type of NN could connect to learners serving another. Another issue is that anybody who knows the learner's address can steal the neural network launched there. So it would be great to implement two things (a minimal sketch of the signature idea in (a) follows this list):
     a. A mechanism that checks that the learner and actor have the same neural network architecture. One way to do it is to generate a hash for the NN: for example, we can take the NN state dict, zero out all weights, and compute the hash of that dictionary. That looks like a good unique signature - if two NNs have the same architecture, the signatures will be equal. We can send this signature whenever an actor connects to a learner and reject the connection on the learner side if the signatures do not match.
     b. Authorization for gRPC. We need some mechanism to check that the actor is authorized to connect to the learner at all. Different options exist: we can use TLS (but it generates more traffic and load), tokens, or some custom mechanism based on gRPC metadata. Here are the docs https://grpc.io/docs/guides/auth/ and examples https://github.com/grpc/grpc/tree/master/examples/python/auth. Another possible way - https://chatgpt.com/share/681e26b0-b6bc-8002-b2fd-5803aff4c806 with metadata-based creds.
  2. System tests. It would be great to have some system tests (https://en.wikipedia.org/wiki/System_testing, or end-to-end tests). They shouldn't be part of the current PR, but they are nice to have, as they could save a lot of time in the future. The idea is to have a CI job that checks end-to-end that the scripts (for example the learner and actor) run without errors, and that can also run training for some NNs and check that they converge. We wasted a lot of time checking basic things after changing the NN architecture; it would be great to have automation for that. One approach: create a docker compose setup that launches the actor and learner, run them, and if either of them reports an error, the CI stops and notifies about the error. If everything works and after some time the collected reward is good enough, we mark the CI as green. Such an approach checks everything together - the networking part, NN convergence, and also the performance. To avoid long waits we can use small tasks with small NNs. It would also allow testing other scripts, like train.py, and would speed up the development process by 10x.
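
Regarding point 1a, a minimal sketch of the architecture-signature idea, assuming both sides hash parameter names, shapes, and dtypes rather than the serialized weights (which keeps the signature independent of the learned values):

    import hashlib
    import torch

    def architecture_signature(model: torch.nn.Module) -> str:
        # Hash only parameter names, shapes, and dtypes, so the signature identifies
        # the architecture rather than the current weight values.
        descriptor = ";".join(
            f"{name}:{tuple(t.shape)}:{t.dtype}" for name, t in sorted(model.state_dict().items())
        )
        return hashlib.sha256(descriptor.encode()).hexdigest()

    # Two models with the same architecture produce the same signature; the actor could
    # send its signature (e.g. in gRPC call metadata) and the learner rejects mismatches.
    model_a = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    model_b = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 2))
    print(architecture_signature(model_a) == architecture_signature(model_b))  # True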

helper2424 avatar May 09 '25 16:05 helper2424

I think that https://github.com/michel-aractingi/lerobot-hilserl-guide should be mentioned somewhere in the README.md or in some docs. Once the current PR is merged, it will be hard to find the guide and completely unclear how to use HIL-SERL.

Also, we removed all docs from the repo, but just imagine: the PR is merged, I open the repo, and I don't understand what is going on with HIL-SERL. I think that we should provide some basic documentation. It shouldn't be super extensive; maybe it should have some overview and just link to https://github.com/michel-aractingi/lerobot-hilserl-guide.

Also, it would be great to mention HIL-SERL in the main README.

Agree that the guide should be included in the README. Also, it would be great if the guide included gym_hil instructions, if we plan to include the HIL environment.

Ke-Wang1017 avatar May 09 '25 16:05 Ke-Wang1017

Hi, thanks for the great work, if you want to cite HIL-SERL, here is the BibTeX:

    @article{luo2024hilserl,
      title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
      author={Jianlan Luo and Charles Xu and Jeffrey Wu and Sergey Levine},
      year={2024},
      eprint={2410.21845},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
    }

and the website: https://hil-serl.github.io/

Thanks

jianlanluo avatar May 12 '25 05:05 jianlanluo

Hi, thanks for the great work, if you want to cite HIL-SERL, here is the BibTeX:

    @article{luo2024hilserl,
      title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
      author={Jianlan Luo and Charles Xu and Jeffrey Wu and Sergey Levine},
      year={2024},
      eprint={2410.21845},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
    }

and the website: https://hil-serl.github.io/

Thanks

Added to the README.md in this commit: https://github.com/huggingface/lerobot/pull/644/commits/b5869709ae458b23f945fb0472a1118672d6c2f5 :smile:

cc: @jianlanluo

AdilZouitine avatar May 16 '25 15:05 AdilZouitine

Nice! Last thing to figure out with @aliberts: where to put files in scripts/server? probably in lerobot/rl folder with (maybe) entry points for scripts in scripts.

@Cadene @AdilZouitine @aliberts

This folder has a mix of different things, wdyt about moving them into different folders like

  • everything related to gRPC to lerobot/common/transport
  • everything related to HIL-SERL - to scripts/rl as @Cadene recommended
  • buffer (which is the replay buffer) to lerobot/common/utils
  • kinematics to lerobot/common
  • end_effector_control_utils to lerobot/common/utils too maybe
  • find_joint_limits to lerobot/scripts

helper2424 avatar May 16 '25 21:05 helper2424

Nice! Last thing to figure out with @aliberts: where to put files in scripts/server? probably in lerobot/rl folder with (maybe) entry points for scripts in scripts.

@Cadene @AdilZouitine @aliberts

This folder has a mix of different things, wdyt about moving them into different folders like

  • everything related to gRPC to lerobot/common/transport
  • everything related to HIL-SERL - to scripts/rl as @Cadene recommended
  • buffer (which is the replay buffer) to lerobot/common/utils
  • kinematics to lerobot/common
  • end_effector_control_utils to lerobot/common/utils too maybe
  • find_joint_limits to lerobot/scripts

@AdilZouitine @michel-aractingi here is a PR that addresses the comment about the folder structure - https://github.com/huggingface/lerobot/pull/1167

helper2424 avatar May 30 '25 17:05 helper2424

Nice! Last thing to figure out with @aliberts: where to put files in scripts/server? probably in lerobot/rl folder with (maybe) entry points for scripts in scripts.

@Cadene @AdilZouitine @aliberts This folder has a mix of different things, wdyt about moving them into different folders like

  • everything related to gRPC to lerobot/common/transport
  • everything related to HIL-SERL - to scripts/rl as @Cadene recommended
  • buffer (which is the replay buffer) to lerobot/common/utils
  • kinematics to lerobot/common
  • end_effector_control_utils to lerobot/common/utils too maybe
  • find_joint_limits to lerobot/scripts

@AdilZouitine @michel-aractingi here is a PR that addresses the comment about the folder structure - #1167

A rebased version is here: https://github.com/huggingface/lerobot/pull/1178

helper2424 avatar Jun 01 '25 17:06 helper2424

Given the size of this PR and our tight deadline for merging into main, I would address any findings worth discussing incrementally via follow-up tickets/PRs. The development team of this PR has already validated the functional aspects of the features described here through in-house experiments.

Once this PR lands in main, we should open tickets/PRs to address the unresolved conversations and to review the code more in depth. This also applies to https://github.com/huggingface/lerobot/pull/1263, which introduces last-minute changes in critical resource management design, for which not all conversations were fully resolved either. Namely: https://github.com/huggingface/lerobot/pull/1263#discussion_r2140201154

cc @AdilZouitine cc @michel-aractingi cc @helper2424

imstevenpmwork avatar Jun 11 '25 22:06 imstevenpmwork

Given the size of this PR and our tight deadline for merging into main, I would address any findings worth discussing incrementally via follow-up tickets/PRs. The development team of this PR has already validated the functional aspects of the features described here through in-house experiments.

Once this PR lands in main, we should open tickets/PRs to address the unresolved conversations and to review the code more in depth. This also applies to #1263, which introduces last-minute changes in critical resource management design, for which not all conversations were fully resolved either. Namely: #1263 (comment)

cc @AdilZouitine cc @michel-aractingi cc @helper2424

@imstevenpmwork sounds good. I also have one more https://github.com/huggingface/lerobot/pull/1266. We can merge it after the hackathon 👍

helper2424 avatar Jun 12 '25 09:06 helper2424

@AdilZouitine @michel-aractingi During data collection, why does the cube spawn in the same position in all episodes? Is there a config to randomize the cube position during reset?

snknitheesh avatar Jun 13 '25 22:06 snknitheesh