
Don't try to do the continuous action scaling in the Policy network.

Open · DavidRNickel opened this issue on Jan 31, 2024 · 0 comments

This isn't so much an issue with the code as it is user error that I'd like to help others avoid:

For anyone who is going to use this code, make sure that you DO NOT try to do the action scaling/bias in the policy model itself. I'm using a custom environment, so I figured it would be easier to do the scaling in Policy.get_action() as is done in the cleanrl implementation (lines 133 and 136). With the scaling in the policy, my code refused to converge even on very simple cases. Unless I'm very bad at combining the two codebases, I think the problem is either (i) erroneous values getting attached to the backpropagation graph or (ii) values being put into the replay buffer with (or without) scaling when they shouldn't (or should) have it.
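For context, the in-policy scaling pattern being warned against looks roughly like the sketch below. This is a paraphrase of the cleanrl-style `get_action` logic, not an exact copy of either codebase; the function name and signature are made up for illustration:

```python
import torch

def get_action_with_in_policy_scaling(mean, log_std, action_scale, action_bias):
    """The pattern to avoid: rescaling the tanh-squashed action inside the policy.

    mean/log_std are the policy head outputs; action_scale/action_bias are
    tensors built from (high - low) / 2 and (high + low) / 2, as in the
    cleanrl-style implementation referred to above.
    """
    normal = torch.distributions.Normal(mean, log_std.exp())
    x_t = normal.rsample()                  # reparameterized sample
    y_t = torch.tanh(x_t)                   # squash to [-1, 1]
    action = y_t * action_scale + action_bias   # scaling done inside the policy
    log_prob = normal.log_prob(x_t)
    # tanh + affine change-of-variables correction
    log_prob -= torch.log(action_scale * (1.0 - y_t.pow(2)) + 1e-6)
    return action, log_prob.sum(-1, keepdim=True)
```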

Solution: Leave the repository code completely alone and do all of your scaling inside the environment. For anyone else writing a custom environment, here's the easy fix:

[Attached code snippets: `extract_actions`, `scale_bias` — the scaling is done inside the environment]

where as_high and as_low are the high and low bounds (of type np.ndarray) that you pass to the environment's Box() space in __init__().
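A minimal sketch of what that can look like, assuming a Gymnasium-style custom environment. The original post's attached snippets aren't reproduced here, so `MyCustomEnv`, `_scale_action`, and the placeholder dynamics are hypothetical; only the `as_low`/`as_high` names come from the post:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class MyCustomEnv(gym.Env):
    """Toy custom environment that keeps all action scaling internal."""

    def __init__(self, as_low, as_high):
        super().__init__()
        self.as_low = np.asarray(as_low, dtype=np.float32)
        self.as_high = np.asarray(as_high, dtype=np.float32)
        # The agent only ever sees a normalized [-1, 1] action space, so the
        # policy network needs no knowledge of the true action bounds.
        self.action_space = spaces.Box(low=-1.0, high=1.0,
                                       shape=self.as_low.shape, dtype=np.float32)
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(1,), dtype=np.float32)

    def _scale_action(self, action):
        # Map a normalized action in [-1, 1] to the true range [as_low, as_high]:
        #   real = action * (high - low) / 2 + (high + low) / 2
        scale = (self.as_high - self.as_low) / 2.0
        bias = (self.as_high + self.as_low) / 2.0
        return np.clip(action, -1.0, 1.0) * scale + bias

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return np.zeros(1, dtype=np.float32), {}

    def step(self, action):
        real_action = self._scale_action(action)
        # Apply real_action to the underlying system; placeholder dynamics here.
        obs = np.zeros(1, dtype=np.float32)
        reward = -float(np.sum(real_action ** 2))
        terminated, truncated = False, False
        return obs, reward, terminated, truncated, {}
```

The key point is that the policy always works in a normalized [-1, 1] space and the environment maps that to its true bounds, so nothing scaling-related ever touches the network or the replay buffer.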

Also, thanks for making this code. It has helped me out a lot!

— DavidRNickel, Jan 31, 2024, 20:01