peft-fine-tuning-llm topic
DoRA
Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
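DoRA's core idea, per the paper title, is to decompose a pretrained weight into a magnitude vector and a direction matrix, apply a low-rank (LoRA-style) update to the direction, and renormalize. Below is a minimal NumPy sketch of that decomposition; the function name `dora_update` and all shapes are illustrative, not taken from the official repo.

```python
import numpy as np

def dora_update(W0, A, B, m):
    # Sketch of weight-decomposed low-rank adaptation:
    # the direction gets the low-rank update B @ A, each column is
    # renormalized, then rescaled by the learned magnitude vector m.
    V = W0 + B @ A                                   # direction + low-rank update
    col_norms = np.linalg.norm(V, axis=0, keepdims=True)
    return m * V / col_norms                         # W' = m * V / ||V||_c

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 6, 2
W0 = rng.standard_normal((d_out, d_in))
B = rng.standard_normal((d_out, r)) * 0.01           # trainable low-rank factors
A = rng.standard_normal((r, d_in)) * 0.01
m = np.linalg.norm(W0, axis=0, keepdims=True)        # magnitude initialized from W0
W = dora_update(W0, A, B, m)
```

After the update, each column of the merged weight has exactly the norm stored in `m`, so magnitude and direction are trained as separate degrees of freedom.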
MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
NOLA
Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
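The compression idea named in the NOLA title is to express each LoRA factor as a linear combination of frozen random basis matrices, so only the mixing coefficients need to be stored and trained. A hedged NumPy sketch of that reconstruction follows; `nola_matrix` and the shapes are assumptions for illustration, not the repo's API.

```python
import numpy as np

def nola_matrix(coeffs, bases):
    # Sketch: rebuild a LoRA factor as a linear combination of k frozen
    # random basis matrices; only `coeffs` (k scalars) would be trained.
    return np.tensordot(coeffs, bases, axes=1)

rng = np.random.default_rng(1)
k, r, d = 4, 2, 6
bases_A = rng.standard_normal((k, r, d))   # frozen, regenerable from a seed
alpha = rng.standard_normal(k)             # trainable coefficients
A = nola_matrix(alpha, bases_A)            # reconstructed factor, shape (r, d)
```

Because the bases can be regenerated from a seed, a checkpoint only needs the `k` coefficients per factor rather than the full `r * d` matrix.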
AntGPT
Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"
APT
[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference
MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
HiFT
Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory