
SyntheticGatherer often gives nearly deterministic feedback


Bug description

The current implementation of the SyntheticGatherer in the preference comparisons module often chooses the trajectory with the higher reward nearly deterministically. This is because the Boltzmann-rational model (a softmax over returns) used by the SyntheticGatherer is very sensitive to the scale of the utilities, and the sums of rewards that serve as utilities tend to be quite large. The gatherer effectively implements this equation for feedback:

$$ P ( A \succ B) = \frac{\exp(\beta R(A))}{\exp(\beta R(A)) + \exp(\beta R(B))} $$

where $A$ and $B$ are trajectories, $R(A)$ is the return of trajectory $A$, and $\beta$ is the rationality coefficient (inverse temperature). Here are some example values with $\beta = 1$ to illustrate the problem:

| R(A) | R(B) | Difference | P(A > B) | P(B > A) |
|------|------|------------|----------|----------|
| 1    | 1    | 0          | 0.5      | 0.5      |
| 1    | 2    | 1          | 0.27     | 0.73     |
| 1    | 3    | 2          | 0.12     | 0.88     |
| 1    | 4    | 3          | 0.05     | 0.95     |
| 1    | 5    | 4          | 0.02     | 0.98     |
| 1    | 7    | 6          | 0.0      | 1.0      |
| 1    | 8    | 7          | 0.0      | 1.0      |
| 1    | 9    | 8          | 0.0      | 1.0      |
| 1    | 10   | 9          | 0.0      | 1.0      |

As the table shows, once the difference in returns reaches roughly 5 or 6, the simulated feedback is effectively deterministic. Note that the probability depends only on the difference in returns; the absolute values are irrelevant.
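For concreteness, here is a small Python sketch (not part of the imitation codebase; the function name `boltzmann_preference` is just illustrative) that reproduces the table above directly from the equation:

```python
import numpy as np

def boltzmann_preference(return_a: float, return_b: float, beta: float = 1.0) -> float:
    """P(A > B) under the Boltzmann-rational model."""
    # Equivalent to exp(beta * R(A)) / (exp(beta * R(A)) + exp(beta * R(B))),
    # rewritten as a sigmoid of the scaled return difference for numerical stability.
    return 1.0 / (1.0 + np.exp(-beta * (return_a - return_b)))

for r_b in [1, 2, 3, 4, 5, 7, 8, 9, 10]:
    p_a = boltzmann_preference(1.0, float(r_b))
    print(f"R(A)=1 R(B)={r_b}: P(A > B) = {p_a:.2f}, P(B > A) = {1 - p_a:.2f}")
```

Since only $\beta \cdot (R(A) - R(B))$ enters the sigmoid, rescaling either $\beta$ or the returns is the only way to keep the feedback stochastic.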

To fix this, we could either normalize the returns or move away from the Boltzmann-rational model to something like the oracle teachers from B-Pref.
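As a rough illustration of the first option, the returns could be rescaled by their empirical spread before the softmax is applied. This is only a sketch of the idea, not the imitation API; `normalized_preference` and the way the return history is tracked are hypothetical:

```python
import numpy as np

def normalized_preference(
    return_a: float,
    return_b: float,
    past_returns: np.ndarray,
    beta: float = 1.0,
) -> float:
    """P(A > B) after rescaling returns by their empirical standard deviation.

    Dividing by the spread of previously observed returns keeps the effective
    utility differences on the order of 1, so the softmax stays stochastic.
    """
    scale = np.std(past_returns) + 1e-8  # avoid division by zero
    diff = (return_a - return_b) / scale
    return 1.0 / (1.0 + np.exp(-beta * diff))

# Example: with raw returns spread over [0, 100], a return difference of 10
# no longer saturates the preference probability.
rng = np.random.default_rng(0)
history = rng.uniform(0, 100, size=1000)
print(normalized_preference(50.0, 60.0, history))  # roughly 0.4 rather than 0.0
```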

timokau commented on Nov 28, 2023

Hi @timokau. Thanks a lot for this hint! We will review the PC implementation in the coming weeks and then this information will be really valuable!

ernestum commented on Dec 11, 2023