
Potential Optimization for Preference Training with Prefix Sharing

austin362667 opened this issue 11 months ago

🚀 The feature, motivation and pitch

In Accelerating Direct Preference Optimization with Prefix Sharing, the authors propose an efficient way to reduce the total number of training tokens in paired preference optimization: the shared prompt is combined with both the chosen and rejected responses into a single sequence. As a result, the computation over the shared prompt is performed only once per training sample, eliminating redundant processing.

To do so, it leverages a custom attention mask that masks out the region where the rejected response would attend to the chosen response, ensuring that the two responses are computed independently of each other.
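As a rough illustration (not the paper's actual implementation), the mask for a sequence laid out as `[prompt | chosen | rejected]` could be built like this in PyTorch; `prefix_sharing_mask` and its layout assumptions are hypothetical:

```python
import torch

def prefix_sharing_mask(p_len: int, c_len: int, r_len: int) -> torch.Tensor:
    """Boolean attention mask (True = may attend) for a packed sequence
    laid out as [prompt | chosen | rejected]."""
    total = p_len + c_len + r_len
    # Start from a standard causal (lower-triangular) mask.
    mask = torch.tril(torch.ones(total, total, dtype=torch.bool))
    # Mask out the region where rejected tokens would attend to chosen
    # tokens, so the two responses only share the prompt's computation.
    r_start = p_len + c_len
    mask[r_start:, p_len:r_start] = False
    return mask
```

With this layout, both responses still attend causally to the full prompt, but the rejected response never sees the chosen one, matching the independence described above.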

To be more specific, please check the diagram from the paper below:

[Image: attention-mask diagram from the paper]

This method extends beyond DPO (the algorithm demonstrated in the paper) and is compatible with all offline paired preference optimization algorithms, including ORPO and SimPO.

Alternatives

No response

Additional context

https://github.com/frankxwang/dpo-prefix-sharing

austin362667 · Dec 13 '24