peft-fine-tuning-llm topic

Repositories tagged peft-fine-tuning-llm

DoRA (122 stars, 3 forks)

Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation"
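DoRA's core idea is to split a pretrained weight into a per-column magnitude and a direction, apply a LoRA-style low-rank update to the direction only, and train the magnitude separately. A minimal numpy sketch of that decomposition follows; it is an illustration of the paper's formula, not the repo's actual API, and all shapes and variable names here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 6, 4, 2

W0 = rng.normal(size=(d_out, d_in))            # frozen pretrained weight
# Decompose W0: per-column magnitude m, direction W0 / ||W0||_col
m = np.linalg.norm(W0, axis=0, keepdims=True)  # trainable magnitude, shape (1, d_in)

# LoRA-style low-rank update applied to the directional component
B = np.zeros((d_out, r))                       # B starts at zero so the merged weight equals W0
A = rng.normal(size=(r, d_in)) * 0.01

V = W0 + B @ A                                 # updated (unnormalized) direction
W = m * V / np.linalg.norm(V, axis=0, keepdims=True)  # merged DoRA weight

# With B zero-initialized, fine-tuning starts exactly at the pretrained weight
assert np.allclose(W, W0)
```

During training, `m`, `A`, and `B` would receive gradients while `W0` stays frozen; at merge time the product above replaces the original weight.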

MOELoRA-peft (115 stars, 15 forks)

[SIGIR'24] The official implementation of MOELoRA.

NOLA (47 stars, 2 forks)

Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis"
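NOLA compresses a LoRA adapter by expressing each low-rank factor as a linear combination of frozen random basis matrices, so only the scalar mixing coefficients need to be trained and stored. The sketch below shows the idea in numpy; it is a hedged illustration under assumed shapes and names, not the repository's interface.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, k = 6, 4, 2, 5            # k random basis matrices per factor

# Frozen random bases; reproducible from a seed, so they need not be stored
A_basis = rng.normal(size=(k, r, d_in))
B_basis = rng.normal(size=(k, d_out, r))

# Only these 2k scalar coefficients are trainable
alpha = rng.normal(size=k) * 0.01
beta = np.zeros(k)                        # zero init keeps the adapter a no-op at start

A = np.tensordot(alpha, A_basis, axes=1)  # (r, d_in): sum_i alpha_i * A_i
B = np.tensordot(beta, B_basis, axes=1)   # (d_out, r): sum_i beta_i * B_i
delta_W = B @ A                           # low-rank update; all zeros at init

assert np.allclose(delta_W, 0.0)
```

Since the bases can be regenerated from a seed, the stored adapter shrinks from `r * (d_in + d_out)` values to `2k` coefficients.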

AntGPT (18 stars, 2 forks)

Official code implementation of the paper "AntGPT: Can Large Language Models Help Long-term Action Anticipation from Videos?"

APT (24 stars, 1 fork)

[ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference

MoE-PEFT (22 stars, 4 forks)

An efficient LLM fine-tuning factory optimized for MoE PEFT

HiFT (18 stars, 2 forks)

Memory-efficient fine-tuning; supports fine-tuning a 7B model within 24 GB of GPU memory