
How to improve the convergence of the training loss?

williamyuanv0 opened this issue · 0 comments

Hi kengz, I find that the training loss (= value loss + policy loss) of the PPO algorithm applied to the game Pong converges poorly (see Fig. 1), but the corresponding mean_returns show a good upward trend and reach convergence (see Fig. 2). Why is that, and how can I improve the convergence of the training loss? I tried many improvement tricks with PPO, but none of them worked.

Fig. 1: ppo_pong_t0_s0_session_graph_eval_loss_vs_frame (loss vs. frame)

Fig. 2: ppo_pong_t0_s0_session_graph_eval_mean_returns_vs_frames (mean_returns vs. frames)
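For context, the training loss I am referring to is, as I understand it, computed roughly like the generic PPO sketch below (in PyTorch). This is not SLM-Lab's exact implementation; the tensor names and the `clip_eps`, `val_coef`, `ent_coef` coefficients are illustrative.

```python
# A minimal, generic sketch of the PPO combined loss (policy + value - entropy bonus).
# This is an assumption of how the reported training loss is composed,
# not SLM-Lab's exact code; names and coefficient values are illustrative.
import torch

def ppo_loss(new_log_probs, old_log_probs, advantages,
             value_preds, returns, entropy,
             clip_eps=0.2, val_coef=0.5, ent_coef=0.01):
    # Probability ratio between the updated and the old (behavior) policy
    ratio = torch.exp(new_log_probs - old_log_probs)
    # Clipped surrogate objective; it is maximized, so the loss is its negation
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    policy_loss = -torch.min(surr1, surr2).mean()
    # Value loss: regression of the critic's predictions toward the returns
    value_loss = (value_preds - returns).pow(2).mean()
    # Combined loss reported during training; the entropy term encourages exploration
    total_loss = policy_loss + val_coef * value_loss - ent_coef * entropy.mean()
    return total_loss, policy_loss, value_loss
```

My expectation was that this combined loss would decrease smoothly as mean_returns improve, but in my runs it does not.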

williamyuanv0 · May 30 '22 07:05