pytorch-soft-actor-critic

Exploding entropy temperature

Open · reubenwong97 opened this issue 5 years ago · 10 comments

Hi,

When I set automatic_entropy_tuning to true in an environment with an action space of shape 1, my entropy temperature explodes, increasing exponentially to a magnitude of 10^8 before PyTorch fails and crashes the run. Any ideas as to why this happens?
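For reference, with automatic_entropy_tuning the temperature is updated roughly like this (a paraphrased, minimal sketch; variable names are assumed, not verbatim from the repo):

```python
import torch

target_entropy = -1.0  # commonly -dim(action_space), so -1 for a shape-1 action space
log_alpha = torch.zeros(1, requires_grad=True)  # alpha = exp(log_alpha) stays positive
alpha_optim = torch.optim.Adam([log_alpha], lr=3e-4)

log_pi = torch.tensor([[2.0]])  # stand-in for the policy's log-probs on a batch

# If the policy's entropy (-log_pi) stays below the target, then
# (log_pi + target_entropy) > 0 and every step pushes log_alpha up,
# so alpha can grow without bound.
alpha_loss = -(log_alpha * (log_pi + target_entropy).detach()).mean()
alpha_optim.zero_grad()
alpha_loss.backward()
alpha_optim.step()
alpha = log_alpha.exp()
```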

reubenwong97 · Oct 11 '20 10:10

Did you ever manage to solve this problem? I'm encountering a similarly exploding temperature in my environment: no matter what target entropy I choose, the policy eventually reaches it and the temperature starts increasing...

seneca-bit · Oct 31 '20 07:10

Hi, not at the moment. I also compared the implementation to OpenAI's baselines and experimented with theirs but with similar results. Working on multiple things at the moment but will provide an update if I find anything.

reubenwong97 · Oct 31 '20 08:10

I found it can be solved by deleting "self.action_scale" in line 103 of model.py. This term does not appear in the SAC paper. But I'm not sure; maybe I'm missing something.
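For reference, the term in question is the tanh change-of-variables correction. A self-contained paraphrase of what sample() computes (a sketch, not verbatim from the repo):

```python
import torch
from torch.distributions import Normal

EPSILON = 1e-6

def squashed_sample(mean, std, action_scale=1.0, action_bias=0.0):
    """Sample a tanh-squashed, rescaled Gaussian action and its log-prob."""
    normal = Normal(mean, std)
    x_t = normal.rsample()   # reparameterized sample
    y_t = torch.tanh(x_t)    # squash into (-1, 1)
    action = y_t * action_scale + action_bias
    # Change of variables: a = action_scale * tanh(x) + action_bias gives
    # da/dx = action_scale * (1 - tanh(x)^2). The SAC paper writes the
    # correction for action_scale == 1; the extra factor here is the
    # chain-rule term for rescaled actions.
    log_prob = normal.log_prob(x_t) - torch.log(action_scale * (1 - y_t.pow(2)) + EPSILON)
    return action, log_prob.sum(-1, keepdim=True)
```

Note that dropping action_scale from the log shifts log_prob by roughly a constant log(action_scale) per dimension (away from saturation), which moves the measured entropy and therefore where the temperature tuning settles.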

qijinshe · Dec 10 '20 08:12

I am facing the same problem. It is not solved yet; however, I can locate the issue:

In GaussianPolicy -> sample(): the pre-squash sample x_t is transformed by tanh(x_t). If you look at the shape of tanh, you will find that it is essentially equal to 1 or -1 for most arguments, outside a small region between roughly -5 and 5. As a result, we receive a saturated y_t: the actions are pinned to the action_space bounds. The algorithm stops exploring, and the entropy tuning therefore keeps increasing the temperature factor alpha. This leads first to an exploding temperature factor and second to an exploding critic loss.
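A quick standalone illustration:

```python
import torch

x = torch.tensor([-10.0, -5.0, -2.0, 0.0, 2.0, 5.0, 10.0])
print(torch.tanh(x))
# tensor([-1.0000, -0.9999, -0.9640,  0.0000,  0.9640,  0.9999,  1.0000])
```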

The solution should be to replace action by x_t in the return statement of the sample() method. However, this leads to an error (traceback below):

I am trying to solve this and will let you know if there are updates from my side. If you have any thoughts or input on this, please let me know.

Traceback (most recent call last):
  File "/Users/hammlerp/PycharmProjects/SupplyChainOptimization/src/Optimization/SoftActorCritic/sac_main.py", line 94, in <module>
    action = agent.select_action(state)  # Sample action from policy
  File "/Users/hammlerp/PycharmProjects/SupplyChainOptimization/src/Optimization/SoftActorCritic/sac.py", line 48, in select_action
    action, _, _ = self.policy.sample(state)
  File "/Users/hammlerp/PycharmProjects/SupplyChainOptimization/src/Optimization/SoftActorCritic/model.py", line 98, in sample
    normal = Normal(mean, std)
  File "/Users/hammlerp/opt/anaconda3/envs/SupplyChainOptimization/lib/python3.8/site-packages/torch/distributions/normal.py", line 50, in __init__
    super(Normal, self).__init__(batch_shape, validate_args=validate_args)
  File "/Users/hammlerp/opt/anaconda3/envs/SupplyChainOptimization/lib/python3.8/site-packages/torch/distributions/distribution.py", line 53, in __init__
    raise ValueError("The parameter {} has invalid values".format(param))
ValueError: The parameter loc has invalid values
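For what it's worth, "The parameter loc has invalid values" means the mean passed to Normal already contains NaN or inf at that point, i.e. the networks have diverged before the crash. A fail-fast check can locate where the NaNs first appear (a debugging sketch; placement is up to you):

```python
import torch

def assert_finite(name, t):
    # Raise as soon as NaN/inf appears, instead of deep inside torch.distributions
    if not torch.isfinite(t).all():
        raise RuntimeError(f"{name} contains NaN/inf (min={t.min()}, max={t.max()})")

# e.g. in GaussianPolicy.sample(), just before `normal = Normal(mean, std)`:
# assert_finite("mean", mean)
# assert_finite("std", std)
```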

chelydrae · Apr 23 '21 09:04

it is essentially equal to 1 or -1 for most arguments, outside a small region between roughly -5 and 5

Can you explain a little more what you meant by that? I am also trying to fix it on my side.

thomashirtz · Apr 26 '21 09:04

Hi, not at the moment. I also compared the implementation to OpenAI's baselines and experimented with theirs but with similar results. Working on multiple things at the moment but will provide an update if I find anything.

Does the OpenAI implementation also have the exploding temperature?

thomashirtz · Apr 26 '21 09:04

it is essentially equal to 1 or -1 for most arguments, outside a small region between roughly -5 and 5

Can you explain a little more what you meant by that? I am also trying to fix it on my side.

The issue is solved on my side. Are you using a custom environment? My problem was related to the environment: make sure to punish your agent when it proposes a value outside the desired interval.
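In case it helps, here is a minimal sketch of what I mean, assuming an old-style Gym env; the wrapper name and penalty scale are made up:

```python
import gym
import numpy as np

class OutOfBoundsPenalty(gym.Wrapper):
    """Hypothetical wrapper: penalize actions proposed outside the action space."""

    def __init__(self, env, penalty=1.0):
        super().__init__(env)
        self.penalty = penalty

    def step(self, action):
        low, high = self.env.action_space.low, self.env.action_space.high
        # How far the proposed action violates the bounds, per dimension
        excess = np.maximum(action - high, 0.0) + np.maximum(low - action, 0.0)
        obs, reward, done, info = self.env.step(np.clip(action, low, high))
        reward -= self.penalty * float(np.sum(excess))
        return obs, reward, done, info
```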

chelydrae · Apr 26 '21 10:04

it is essentially equal to 1 or -1 for most arguments, outside a small region between roughly -5 and 5

Can you explain a little more what you meant by that? I am also trying to fix it on my side.

The issue is solved on my side. Are you using a custom environment? My problem was related to the environment: make sure to punish your agent when it proposes a value outside the desired interval.

I am confused, I thought the aim of the squashed Gaussian was to not go outside the interval? So the code is working on your side with the learned temperature and without modification?

(I am trying to use it on 'LunarLanderContinuous-v2' right now; the scores hover around 0, and to solve it you need >200.)

thomashirtz · Apr 26 '21 11:04

So the code is working on your side with the learned temperature and without modification?

Yes. It works without modification. If I have time in the evening I'll check LunarLander for you.

chelydrae · Apr 26 '21 13:04

Oh, my bad, it is working 😶 Thank you though :)

thomashirtz · Apr 26 '21 13:04