[python/gym] Add termination conditions and reward components toolboxes.

Open · duburcqa opened this issue 1 year ago • 1 comment

Currently, there is no toolbox of pre-implemented termination conditions and reward components. It would be nice to provide highly optimized yet modular implementations of the most common cases. Here is a template of what could be done for the reward:

from abc import ABC, abstractmethod
from typing import Callable, Dict, Sequence, Tuple

import numpy as np

# Assuming these are importable from gym_jiminy (exact module paths may differ).
from gym_jiminy.common.envs import BaseJiminyEnv
from gym_jiminy.common.bases import InfoType


class AbstractRewardCatalog(ABC):
    def __init__(self, env: BaseJiminyEnv, reward_mixture: Dict[str, float]) -> None:
        self.env = env
        # Map each named reward component (a method of this class) to its weight.
        self.reward_mixture: Dict[Callable[..., float], float] = {
            getattr(self, name): weight for name, weight in reward_mixture.items()}
        self._initialize_buffers()

    @abstractmethod
    def _initialize_buffers(self) -> None:
        pass

    @abstractmethod
    def _refresh_buffers(self) -> None:
        pass

    def compute_reward(self,
                       terminated: bool,
                       truncated: bool,
                       info: InfoType) -> float:
        reward_total = 0.0
        self._refresh_buffers()
        for reward_fun, weight in self.reward_mixture.items():
            reward_total += weight * reward_fun(terminated, truncated, info)
        return reward_total


class WalkerRewardCatalog(AbstractRewardCatalog):
    def _initialize_buffers(self) -> None:
        # (position, rotation) pairs for the left and right feet, refreshed every step.
        self.foot_placements: Sequence[Tuple[np.ndarray, np.ndarray]] = ()

    def _refresh_buffers(self) -> None:
        pass

    def foot_placement(self,
                       terminated: bool,
                       truncated: bool,
                       info: InfoType) -> float:
        (left_foot_pos, _), (right_foot_pos, _) = self.foot_placements
        return float(np.linalg.norm(left_foot_pos - right_foot_pos))
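
As a hedged illustration only, the _refresh_buffers stub above could be filled in by fetching the feet placements from the robot's pinocchio data. The subclass name, frame names, and weight below are assumptions, not part of the proposal:

class ExampleWalkerRewardCatalog(WalkerRewardCatalog):
    """Hypothetical refinement filling in the _refresh_buffers stub."""

    def _refresh_buffers(self) -> None:
        # Update the cached feet placements from the current simulation state.
        placements = []
        for frame_name in ("LeftSole", "RightSole"):  # assumed frame names
            frame_index = self.env.robot.pinocchio_model.getFrameId(frame_name)
            pose = self.env.robot.pinocchio_data.oMf[frame_index]
            placements.append((pose.translation.copy(), pose.rotation.copy()))
        self.foot_placements = tuple(placements)


# Hypothetical usage, with an illustrative weight for the single component:
reward_catalog = ExampleWalkerRewardCatalog(env, reward_mixture={"foot_placement": 1.0})
reward = reward_catalog.compute_reward(terminated=False, truncated=False, info={})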

duburcqa · Dec 08 '23 15:12

After thinking twice, I think it makes more sense to provide a QuantityManager. This quantity manager could then be forwarded to independent reward components satisfying some callable protocol (i.e. a lambda, a function, or a class defining __call__, aka a functor). It would be more modular and easier to extend this way. To be computationally efficient, this quantity manager should heavily rely on caching. The cache must be cleared manually before computing any reward component; each quantity is then computed only the first time it is requested, or never if it is not used. Here is a snippet:

from typing import Any, Callable, Dict

import numpy as np
import pinocchio as pin

# Assuming this is importable from jiminy_py (exact module path may differ).
from jiminy_py.robot import BaseJiminyRobot


class QuantityManager:
    def __init__(self, robot: BaseJiminyRobot, quantities: Dict[str, Callable[[], Any]]) -> None:
        self.robot = robot
        self.quantities = quantities
        self._cache: Dict[str, Any] = {}

    def __getattr__(self, name: str) -> Any:
        # Compute a quantity the first time it is requested, then serve it from the cache.
        if name not in self._cache:
            self._cache[name] = self.quantities[name]()
        return self._cache[name]

    def __getitem__(self, name: str) -> Any:
        return getattr(self, name)

    def reset(self) -> None:
        # Invalidate all cached quantities, typically once per environment step.
        self._cache.clear()

class RelativePose:
    def __init__(self, robot: BaseJiminyRobot, first_name: str, second_name: str) -> None:
        first_index = robot.pinocchio_model.getFrameId(first_name)
        second_index = robot.pinocchio_model.getFrameId(second_name)
        # Keep references to the world placement of both frames in pinocchio data.
        self.first_pose = robot.pinocchio_data.oMf[first_index]
        self.second_pose = robot.pinocchio_data.oMf[second_index]

    def __call__(self) -> pin.SE3:
        # Pose of the second frame expressed in the local frame of the first one.
        return self.first_pose.actInv(self.second_pose)

def foot_placement_reward(quantities: QuantityManager) -> float:
    return np.linalg.norm(quantities.foot_pose_rel.translation)

[...]

foot_pose_rel_qty = RelativePose(env.robot, "LeftSole", "RightSole")
quantities = QuantityManager(env.robot, {"foot_pose_rel": foot_pose_rel_qty})

There would be a reward manager, taking a quantity manager and a set of reward components as input. It would expose a single compute_reward method that would first call reset on the quantity manager, and then evaluate all reward components individually. Eventually, it would also feature a reset method calling the reset method of each reward component, if any. It may be beneficial to keep termination conditions and reward evaluation together, to avoid computing quantities twice. If so, it would be trickier to determine when to reset cached quantities automatically.
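
For illustration, here is a minimal, non-authoritative sketch of such a reward manager, assuming the QuantityManager above and reward components that take it as their only argument (the class name and API are assumptions, not the final design):

from typing import Callable, Dict, Optional


class RewardManager:
    """Hypothetical reward manager aggregating weighted reward components."""

    def __init__(self,
                 quantities: QuantityManager,
                 components: Dict[Callable[[QuantityManager], float], float]) -> None:
        self.quantities = quantities
        self.components = components

    def compute_reward(self) -> float:
        # Invalidate the quantity cache once, so that each quantity is computed
        # at most once per step no matter how many components request it.
        self.quantities.reset()
        return sum(weight * reward_fun(self.quantities)
                   for reward_fun, weight in self.components.items())

    def reset(self) -> None:
        # Propagate reset to stateful reward components, if they define one.
        for reward_fun in self.components:
            reset_fun: Optional[Callable[[], None]] = getattr(reward_fun, "reset", None)
            if reset_fun is not None:
                reset_fun()


# Hypothetical usage with the components defined above; the weight is illustrative.
reward_manager = RewardManager(quantities, {foot_placement_reward: 1.0})
reward = reward_manager.compute_reward()

Keying the mixture directly by callables keeps the components interchangeable, whether they are plain functions, lambdas, or stateful functors with an optional reset.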

duburcqa · Feb 17 '24 17:02

This issue has been addressed. Closing.

https://github.com/duburcqa/jiminy/pull/784
https://github.com/duburcqa/jiminy/pull/786
https://github.com/duburcqa/jiminy/pull/787
https://github.com/duburcqa/jiminy/pull/792

duburcqa · May 14 '24 07:05