
[Question] Wrong scaled_action for continuous actions in `_sample_action()`?

Open KiwiXR opened this issue 2 years ago • 7 comments

❓ Question

The following code samples an action for an off-policy algorithm. As the comments indicate, the continuous actions obtained in line 395 should already have been scaled by tanh, which puts them in the range (-1, 1). However, in line 399 the action is scaled again, which makes the valid action space even smaller. Is this a potential bug, or just my misunderstanding?

https://github.com/DLR-RM/stable-baselines3/blob/5aa6e7d340108c8d109cd608468e8dc1fa33030d/stable_baselines3/common/off_policy_algorithm.py#L388-L399

Below I attach a modified version to demonstrate my idea.

    else:
        # Note: when using continuous actions,
        # we assume that the policy uses tanh to scale the action
        # We use non-deterministic action in the case of SAC, for TD3, it does not matter
        unscaled_action, _ = self.predict(self._last_obs, deterministic=False)
        # unscale in the continuous case, as the prediction is scaled
        if isinstance(self.action_space, gym.spaces.Box):
            unscaled_action = self.policy.unscale_action(unscaled_action)

    # Rescale the action from [low, high] to [-1, 1]
    if isinstance(self.action_space, gym.spaces.Box):
        scaled_action = self.policy.scale_action(unscaled_action)

Checklist

  • [X] I have checked that there is no similar issue in the repo
  • [X] I have read the documentation
  • [X] If code there is, it is minimal and working
  • [X] If code there is, it is formatted using the markdown code blocks for both code and stack traces.

KiwiXR avatar Jan 10 '23 15:01 KiwiXR

Hello,

> As the comments indicate, the continuous actions obtained in line 395 should already have been scaled by tanh, which puts them in the range (-1, 1).

I think the comment is misleading. The output of predict() is in [low, high] (that's why it is named unscaled_action); the unscaling is done only when using squashing (which is the case for all off-policy algorithms): https://github.com/DLR-RM/stable-baselines3/blob/30a19848cef28f88a4cb38667ba74b3963be8f1d/stable_baselines3/common/policies.py#L347-L354

So the scaling afterward does scale the action back to [-1, 1] before storing it in the replay buffer.
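For reference, here is a minimal standalone sketch of the two mappings involved. The function names mirror the SB3 helpers `scale_action` / `unscale_action`, but the snippet and the example bounds are purely illustrative and assume a finite `Box` space:

    import numpy as np

    low, high = np.array([0.0, -2.0]), np.array([10.0, 2.0])  # hypothetical Box bounds

    def scale_action(action):
        # [low, high] -> [-1, 1]: the form stored in the replay buffer
        return 2.0 * ((action - low) / (high - low)) - 1.0

    def unscale_action(scaled_action):
        # [-1, 1] -> [low, high]: the form returned by predict() and sent to the env
        return low + 0.5 * (scaled_action + 1.0) * (high - low)

    action = np.array([7.5, 1.0])  # action in environment units
    assert np.allclose(unscale_action(scale_action(action)), action)  # round trip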

I would be happy to receive a PR that updates the comment ;)

EDIT: the comment is correct (it states the assumption the policy makes) but misleading.

araffin avatar Jan 10 '23 16:01 araffin

Thank you for the clear explanation! I would be happy to open a PR once I come up with a better comment.

KiwiXR avatar Jan 11 '23 02:01 KiwiXR

I wonder why we need to store the unscaled action in the replay buffer instead of the final action actually taken in the environment.

AmorWwQ avatar Feb 08 '23 05:02 AmorWwQ

> I wonder why we need to store the unscaled action in the replay buffer instead of the final action actually taken in the environment.

We need to store the action sampled from the underlying action distribution. For SAC, as it uses a tanh, the scaled/unscaled actions are the same most of the time (when the bounds are symmetric and equal to [-1, 1]). If you store the unscaled action (in [low, high]), then you will have issues during the gradient update: for instance, you will compute the likelihood of sampling such an action, and if low < -1 or high > 1, that likelihood is zero because SAC only outputs actions in [-1, 1].
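To illustrate why an out-of-range action breaks the likelihood computation, here is a minimal numpy sketch of the log-probability of a tanh-squashed Gaussian (the kind of distribution SAC samples from); the function and its constants are made up for illustration, this is not SB3's implementation:

    import numpy as np

    def tanh_gaussian_log_prob(action, mean, std):
        # log-probability of action = tanh(u), with u ~ N(mean, std**2)
        pre_tanh = np.arctanh(action)  # undefined (nan) if |action| >= 1
        gauss_log_prob = -0.5 * (((pre_tanh - mean) / std) ** 2 + np.log(2 * np.pi * std ** 2))
        # change-of-variables correction for the tanh squashing
        return gauss_log_prob - np.log(1.0 - action ** 2)

    print(tanh_gaussian_log_prob(0.5, 0.0, 1.0))  # finite: 0.5 can be produced by tanh
    print(tanh_gaussian_log_prob(1.5, 0.0, 1.0))  # nan: 1.5 can never be produced by tanh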

For TD3/DDPG, which don't rely on a probability distribution, you will have issues when you want to add noise to the stored action or when you update the Q-values with actions that cannot come from the actor (see https://github.com/vwxyzjn/cleanrl/issues/279 for the kind of issues you may run into).

Of course, you could do the scaling everywhere in the gradient update, but this is bug-prone, and having everything in [-1, 1] keeps things simpler.
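As a rough sketch of that last point (a TD3-style exploration step for a hypothetical Box(-2, 2) action space, not SB3 code): noise is added and clipped while everything is still in [-1, 1], and only the copy sent to the environment is unscaled.

    import numpy as np

    low, high = -2.0, 2.0  # hypothetical Box(-2, 2) bounds
    rng = np.random.default_rng(0)

    policy_action = np.tanh(rng.normal())  # stand-in for the actor output, already in [-1, 1]
    buffer_action = np.clip(policy_action + rng.normal(scale=0.1), -1.0, 1.0)  # noise + clip
    env_action = low + 0.5 * (buffer_action + 1.0) * (high - low)  # unscale only for env.step()
    # buffer_action (in [-1, 1]) is what goes into the replay buffer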

araffin avatar Feb 08 '23 13:02 araffin

Hello @araffin, I'm curious why the following code is used to scale the action: `return 2.0 * ((action - low) / (high - low)) - 1.0`. In particular, why multiply by 2 and subtract 1?

wjh-scut avatar Apr 03 '23 09:04 wjh-scut

> In particular, why multiply by 2 and subtract 1?

How would you do it otherwise?

araffin avatar Apr 03 '23 09:04 araffin

I think I now understand the reason for this approach: the scaling is a min-max normalization followed by a linear change, so the action range is mapped to [-1, 1]. Thank you for your reply.
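A quick numeric check of that reading, with hypothetical bounds low=0 and high=10:

    low, high = 0.0, 10.0
    action = 7.5
    normalized = (action - low) / (high - low)  # min-max normalization: 0.75, in [0, 1]
    scaled = 2.0 * normalized - 1.0             # stretch [0, 1] to [-1, 1]: 0.5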

wjh-scut avatar Apr 03 '23 10:04 wjh-scut