
Loss function only contains instantaneous reward but not cumulated reward

Open · AchillesJJ opened this issue 5 years ago · 1 comment

As shown in nnagent.py, the author uses the average return of a batch as the loss function. However, it seems that this loss function only contains the instantaneous reward, not the average cumulated reward. To be specific, suppose we have a batch of experience as follows:

mini_batch = $(s_t, a_t, r_t, \ldots, s_{t+T}, a_{t+T}, r_{t+T})$
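To make the distinction concrete, here is a minimal NumPy sketch contrasting the two objectives being discussed: the batch average of per-period rewards (what the issue says the loss contains) versus a discounted cumulated return. The batch length, discount factor, and random rewards are illustrative assumptions, not values from the repository.

```python
import numpy as np

# Hypothetical per-period rewards r_t, ..., r_{t+T} from one mini-batch.
rewards = np.random.normal(loc=0.001, scale=0.01, size=64)

# Objective the issue describes: negative average of the instantaneous
# rewards in the batch.
instantaneous_loss = -np.mean(rewards)

# The cumulated-reward alternative the question has in mind: a discounted
# sum of future rewards from each start time, averaged over the batch.
gamma = 0.99  # assumed discount factor, for illustration only
cumulated = np.array([
    sum(gamma ** k * r for k, r in enumerate(rewards[t:]))
    for t in range(len(rewards))
])
cumulated_loss = -np.mean(cumulated)
```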

AchillesJJ avatar Oct 19 '18 00:10 AchillesJJ

However, it seems that this loss function only contains the instantaneous reward, not the average cumulated reward.

If there is no commission fee, and the action does not affect the state transition, then optimizing the immediate reward is equivalent to optimizing the long-term value. This point, together with the differentiable reward function, gives superior sample efficiency compared with general-purpose RL. To deal with the commission fee, we treat it as a regularization term.
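A minimal NumPy sketch of this argument (the function name, shapes, and the commission approximation are illustrative assumptions, not the repository's exact code): because log portfolio returns add up over time, maximizing the per-period mean log return also maximizes the final cumulative value when actions do not move future prices, and the transaction-cost factor enters the same loss as a penalty on turnover.

```python
import numpy as np

def batch_loss(price_relatives, weights, commission_rate=0.0025):
    """Hedged sketch of a batch loss in the spirit of the reply.

    price_relatives: array of shape (T, m), the price-relative vectors y_t.
    weights: array of shape (T, m), portfolio weights output by the
        network for each period (each row sums to 1).
    commission_rate: assumed flat transaction-cost rate.
    """
    # Per-period portfolio return before costs: <y_t, w_t>.
    period_returns = np.sum(price_relatives * weights, axis=1)

    # Rough transaction-cost factor: 1 - c * sum_i |w_t,i - w_{t-1,i}|.
    # This is the "regularization" role the commission fee plays here.
    turnover = np.abs(np.diff(weights, axis=0)).sum(axis=1)
    mu = np.concatenate(([1.0], 1.0 - commission_rate * turnover))

    # Negative mean log return over the batch: since log returns are
    # additive, its maximizer also maximizes the long-term portfolio value
    # when the actions do not affect the state transition.
    return -np.mean(np.log(mu * period_returns))
```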

ZhengyaoJiang avatar Oct 19 '18 11:10 ZhengyaoJiang