pytorch-trpo
Bootstrapping the value function?
Currently, the target for the value function is the discounted sum of all future rewards. This gives an unbiased estimate but results in higher variance. An alternative is to use a bootstrapped estimate, i.e. something like `target[i] = rewards[i] + gamma * prev_value * masks[i]`, where `prev_value` is the value estimate of the next state (following the reversed-loop convention used in this repo).
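A minimal sketch of the difference, assuming `rewards`, `values`, and `masks` tensors in the same layout this repo uses (`masks[i] == 0` at episode boundaries); the tensor contents below are hypothetical placeholders:

```python
import torch

# Hypothetical rollout data: rewards[i] is the reward at step i,
# values[i] is the critic's estimate V(s_i), and masks[i] is 0 at
# episode ends, 1 otherwise.
rewards = torch.tensor([1.0, 0.5, 2.0, 1.0])
values = torch.tensor([0.9, 0.6, 1.8, 1.1])
masks = torch.tensor([1.0, 1.0, 1.0, 0.0])
gamma = 0.99

T = rewards.size(0)
mc_targets = torch.zeros(T)  # discounted sum of all future rewards
td_targets = torch.zeros(T)  # one-step bootstrapped targets

prev_return = 0.0
prev_value = 0.0
for i in reversed(range(T)):
    # Monte Carlo target: full discounted return (unbiased, higher variance)
    mc_targets[i] = rewards[i] + gamma * prev_return * masks[i]
    # Bootstrapped target: reward plus discounted next-state value estimate
    # (biased through the critic, lower variance)
    td_targets[i] = rewards[i] + gamma * prev_value * masks[i]
    prev_return = mc_targets[i]
    prev_value = values[i]
```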
Bootstrapping is often preferred due to its lower variance, even though it results in a biased gradient estimate.
This is done in Tianshou; check it out at https://github.com/thu-ml/tianshou if you are still interested. @XuchanBao