Liger-Kernel
GKD trainer + chunked JSD loss + FSDP
Hello Liger Kernel team,
First of all, thank you for making this project available! I’ve been exploring your codebase and tried to implement GKDTrainer using the chunked_jsd_loss similarly to how ORPOTrainer handles it. I’m now aiming to use Fully Sharded Data Parallel (FSDP) for both the teacher and student models but am unsure of the best way to integrate it.
I would greatly appreciate any guidance you could provide on:
- Implementing the chunked JSD loss function for FSDP-enabled training – Are there recommended patterns or helper functions within the codebase that can simplify this process?
- Key code structures or APIs in the GKDTrainer – Which parts of GKDTrainer might need modification or extension to properly handle chunked JSD loss under FSDP?
- Best practices or potential pitfalls – Have you encountered any common issues or gotchas when combining chunked losses with FSDP that I should be aware of?
- Code snippets or references – If you have any example snippets, documentation references, or design patterns that illustrate how to properly handle teacher and student models together under FSDP, that would be incredibly helpful.

Thank you in advance for your time and assistance! Any insights, tips, or examples you can share will help me get up and running much more quickly.
Additional Context:
I’m currently referencing the ORPOTrainer sample but see that it doesn’t fully address the GKD use case.
> Implementing the chunked JSD loss function for FSDP-enabled training
The approach is similar to LigerORPOTrainer, where we pass the models' lm_head weights and last hidden states to Liger{ORPO/JSD}Loss, which then returns the expected loss. The caveat with FSDP, however, is that you need to unshard the FSDP root parameters before running the forward pass of Liger{ORPO/JSD}Loss (assuming you're using FSDP1); otherwise the lm_head weight would remain sharded across the GPUs (ref: https://github.com/linkedin/Liger-Kernel/blob/2bb8dcfc18f10ff90f942f238b5cfe16c12749b6/src/liger_kernel/transformers/trainer/orpo_trainer.py#L18-L66).
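To make the shapes and math concrete, here is a plain-PyTorch sketch of a chunked generalized JSD loss over lm_head weights and last hidden states. This is a reference implementation, not Liger's fused kernel; the function name and signature are hypothetical, and under FSDP the weight tensors passed in must already be unsharded (full) matrices. Chunking means the full `(tokens, vocab)` logits are never materialized at once.

```python
import math

import torch
import torch.nn.functional as F


def chunked_jsd_loss(
    student_hidden: torch.Tensor,   # (tokens, hidden)
    teacher_hidden: torch.Tensor,   # (tokens, hidden)
    student_lm_head: torch.Tensor,  # (vocab, hidden); must be unsharded under FSDP
    teacher_lm_head: torch.Tensor,  # (vocab, hidden); must be unsharded under FSDP
    beta: float = 0.5,              # mixture weight, assumed in (0, 1)
    chunk_size: int = 1024,
) -> torch.Tensor:
    """Generalized JSD between student and teacher next-token distributions,
    computed chunk by chunk along the token dimension (reference sketch)."""
    total = student_hidden.new_zeros(())
    n_tokens = student_hidden.shape[0]
    for s_h, t_h in zip(
        student_hidden.split(chunk_size), teacher_hidden.split(chunk_size)
    ):
        s_logp = F.log_softmax(s_h @ student_lm_head.T, dim=-1)
        with torch.no_grad():  # no gradients flow to the teacher
            t_logp = F.log_softmax(t_h @ teacher_lm_head.T, dim=-1)
        # log of the mixture m = beta * p_teacher + (1 - beta) * p_student
        log_m = torch.logsumexp(
            torch.stack([t_logp + math.log(beta), s_logp + math.log(1.0 - beta)]),
            dim=0,
        )
        kl_teacher = (t_logp.exp() * (t_logp - log_m)).sum(-1)
        kl_student = (s_logp.exp() * (s_logp - log_m)).sum(-1)
        total = total + (beta * kl_teacher + (1.0 - beta) * kl_student).sum()
    return total / n_tokens
```

Because the loss is a sum over tokens divided by the token count, the result is independent of `chunk_size`, so you can tune the chunk size purely for memory.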
> Key code structures or APIs in the GKDTrainer – Which parts of GKDTrainer might need modification or extension to properly handle chunked JSD loss under FSDP?
Took a quick look through the GKDTrainer in trl -- I'd say you need to patch the compute_loss function to get the last hidden states, perform the unsharding discussed above, and then compute the loss through the chunked JSD.
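A rough sketch of what that patched compute_loss could look like, reduced to a standalone function so the data flow is visible. The `TinyCausalLM` stand-in and `gkd_compute_loss` are hypothetical names, not trl or Liger APIs; under FSDP you would additionally unshard the root params (e.g. via `FSDP.summon_full_params`) around the loss call so that `lm_head.weight` is the full matrix.

```python
from types import SimpleNamespace

import torch
import torch.nn as nn


class TinyCausalLM(nn.Module):
    """Stand-in for a HF causal LM: embeds tokens and exposes an lm_head."""

    def __init__(self, vocab_size: int = 16, hidden_size: int = 8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lm_head = nn.Linear(hidden_size, vocab_size, bias=False)

    def forward(self, input_ids, output_hidden_states=False):
        # a real model would only return hidden_states when requested
        h = self.embed(input_ids)
        return SimpleNamespace(hidden_states=(h,), logits=self.lm_head(h))


def gkd_compute_loss(student, teacher, input_ids, chunked_loss_fn):
    """Sketch of a patched compute_loss: run both models with
    output_hidden_states=True, then hand the last hidden states plus the
    (unsharded) lm_head weights to the chunked loss."""
    student_out = student(input_ids, output_hidden_states=True)
    with torch.no_grad():  # teacher is frozen during distillation
        teacher_out = teacher(input_ids, output_hidden_states=True)
    s_hidden = student_out.hidden_states[-1].flatten(0, 1)  # (B*T, hidden)
    t_hidden = teacher_out.hidden_states[-1].flatten(0, 1)
    return chunked_loss_fn(
        s_hidden,
        t_hidden,
        student.lm_head.weight,
        teacher.lm_head.weight.detach(),  # keep the teacher out of the graph
    )
```

The key point is that only hidden states and lm_head weights cross into the loss; the full student logits never need to be materialized by the trainer itself.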
> Best practices or potential pitfalls – Have you encountered any common issues or gotchas when combining chunked losses with FSDP that I should be aware of?
torch.compile gave us some issues when we were using mixed-precision training. A workaround was to force the inputs to LigerJSDLoss to be float32.
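That workaround can be a thin wrapper that upcasts every tensor input before the loss call. This helper (`jsd_loss_fp32` is a made-up name, not a Liger API) is just a sketch of the idea: the model forward stays in bf16/fp16 under autocast, but the loss itself sees float32.

```python
import torch


def jsd_loss_fp32(loss_fn, *tensors: torch.Tensor) -> torch.Tensor:
    """Upcast all tensor inputs to float32 before calling the (possibly
    compiled) JSD loss, dodging the mixed-precision issues we hit with
    torch.compile. `loss_fn` stands for whatever chunked JSD callable
    you use."""
    return loss_fn(*(t.float() for t in tensors))
```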
> Code snippets or references – If you have any example snippets, documentation references, or design patterns that illustrate how to properly handle teacher and student models together under FSDP, that would be incredibly helpful.
Don't have the exact code snippet but can point you to two references:
- LigerORPOTrainer: Has the fsdp redirection to unshard_weights.
- [WIP] [Liger] liger JSD support: This has some WIP patching code for doing what you need, but this PR alone won't work for FSDP.