
The performance of the SAC algorithm in this project is significantly worse than that of SAC in Stable Baselines3.

Open · ynulihao opened this issue · 4 comments

The performance of the SAC algorithm in this project is significantly worse than that of SAC in Stable Baselines3. Training the slide cabinet subtask in the kitchen environment with this project's SAC fails to converge, and the losses explode exponentially. I have carefully compared this project's SAC code with the SAC in Stable Baselines3 and cannot find a reason for this anomaly. https://github.com/clvrai/spirl/blob/master/spirl/rl/agents/ac_agent.py https://github.com/DLR-RM/stable-baselines3/blob/master/stable_baselines3/sac/sac.py

ynulihao · Apr 19 '22

Hi! Thanks for raising this issue! I have not run this comparison before, so I can't tell you exactly why you are observing such different outcomes. Generally, small implementation details can have outsized effects on RL algorithms. At a glance, it seems for example that Stable Baselines uses observation normalization by default while we do not. Other small differences might also explain the gap, e.g. in the architecture (network size, choice of normalization, ...) or in the learning algorithm (target entropy value, multiple experience-collection workers vs. a single worker, ...). When I implemented SAC for this repo I verified that it roughly matches the performance of other SAC repos on a few standard OpenAI Gym envs, but it is possible that other implementation choices work better on the sparser kitchen tasks you mention.
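For reference, a minimal sketch of how one might configure Stable Baselines3 SAC so that the settings it exposes line up with the points above (no observation-normalization wrapper, a 256x256 MLP, a single worker). The env id and hyperparameter values are assumptions and would need to be matched against the spirl config actually being compared:

```python
import gym
import d4rl  # noqa: F401  -- assumption: registers the kitchen envs; swap in whatever kitchen wrapper you use
from stable_baselines3 import SAC

# Placeholder env id -- substitute the kitchen (slide cabinet) environment used in the comparison.
env = gym.make("kitchen-mixed-v0")

model = SAC(
    "MlpPolicy",
    env,                                        # no VecNormalize wrapper, so observations stay unnormalized
    learning_rate=3e-4,                         # align with the spirl learning rate if it differs
    buffer_size=1_000_000,
    batch_size=256,
    ent_coef="auto",                            # or a fixed float to mimic a fixed alpha
    policy_kwargs=dict(net_arch=[256, 256]),    # align the network size with the spirl config
    verbose=1,
)
model.learn(total_timesteps=1_000_000)
```

Running both implementations with the remaining differences removed one at a time (normalization, network size, entropy handling) is one way to isolate which choice is responsible for the gap.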

kpertsch · Apr 21 '22

Thanks for your reply. I experimented on the KitchenAllTasksV0 environment using the SAC algorithm from your project; the training logs are in WandB. One strange phenomenon is that q_target, policy_loss, and critic_loss all increase exponentially, which I have not seen in other SAC implementations. What could be the reason for this?

ynulihao · Apr 22 '22

I am not sure why this is happening. Two things you could check:

(1) Did you use Q-target clipping during training? This clipping can stabilize training by avoiding very large Q-errors (you can use the existing flag clip_q_target = True).

(2) From the WandB plots it seems that the alpha value is increasing a lot (which can explain why the Q-values grow too). You could try running with a fixed alpha value instead; you would need to sweep a few values to find one that balances the reward and entropy objectives (you can use the existing flag fixed_alpha). I have sometimes found fixed alpha values to work better in the kitchen environment (maybe because it is sparse?).
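For illustration, a rough sketch of what both suggestions look like in a generic PyTorch SAC critic update. This is not the spirl implementation; the names are illustrative, with `q_target_clip` and `alpha` playing the roles of the clip_q_target and fixed_alpha flags mentioned above:

```python
import torch

def critic_target(reward, done, next_q, next_log_prob,
                  gamma=0.99, alpha=0.2, q_target_clip=100.0):
    """Compute the SAC TD target with Q-target clipping and a fixed alpha.

    A fixed `alpha` replaces automatic entropy tuning, and clamping the target
    keeps a diverging critic from feeding ever-larger targets back into itself.
    """
    # Soft value of the next state: Q(s', a') - alpha * log pi(a' | s')
    soft_next_v = next_q - alpha * next_log_prob
    target = reward + gamma * (1.0 - done) * soft_next_v
    # Q-target clipping: bound the bootstrapped target to a fixed range.
    return torch.clamp(target, -q_target_clip, q_target_clip)
```

With a fixed alpha there is no temperature loss, so the automatic entropy-tuning step is simply skipped; the clip range and alpha value above are placeholders that would need tuning per task.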

kpertsch · May 02 '22

Hi @ynulihao, may I know which code you ran to get the stable-baselines3 baseline?

gunnxx · Aug 16 '22