mLoRA

An Efficient "Factory" to Build Multiple LoRA Adapters

Results: 25 mLoRA issues, sorted by recently updated

We can provide an example introducing how to use our system to fine-tune LLaMA-2 with fewer resources: https://www.kaggle.com/code/rraydata/multi-lora-example/notebook (see the sketch below).
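
For reference, a minimal sketch of attaching a single LoRA adapter to a LLaMA-2 checkpoint, written with Hugging Face PEFT rather than mLoRA's own API; the checkpoint name and hyperparameters are placeholders, not values from this repository.

```python
# Minimal LoRA fine-tuning setup using Hugging Face PEFT (not mLoRA's API).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # placeholder: any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)

# Attach a LoRA adapter: only the low-rank matrices are trained,
# which is what keeps memory usage low compared with full fine-tuning.
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of the base weights
```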

Please add documentation to the README on how to evaluate a LoRA fine-tuned model (see the sketch after this issue).

good first issue
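
A possible starting point for that documentation, sketched with Hugging Face PEFT rather than mLoRA's CLI; the `adapter_out/` path and the prompt are hypothetical placeholders.

```python
# Load a saved LoRA adapter on top of its base model and run a quick generation check.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = "meta-llama/Llama-2-7b-hf"  # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, "adapter_out/")  # placeholder adapter path
model.eval()

prompt = "Explain what a LoRA adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```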

We should provide a WebUI so end users can fine-tune their models via multi-LoRA, similar to https://modelscope.cn/studios/hiyouga/LLaMA-Board/summary (see the sketch after this issue).

good first issue
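
A minimal Gradio sketch of what such a front end could look like; `launch_training` is a hypothetical hook into the trainer, not an existing mLoRA function, and the form fields are illustrative.

```python
# Tiny Gradio form that would collect fine-tuning settings and hand them to the trainer.
import gradio as gr

def launch_training(base_model: str, adapter_name: str, batch_size: int) -> str:
    # Placeholder: a real UI would start the multi-LoRA training job here.
    return f"Would train adapter '{adapter_name}' on {base_model} (batch_size={batch_size})"

demo = gr.Interface(
    fn=launch_training,
    inputs=[
        gr.Textbox(label="Base model"),
        gr.Textbox(label="Adapter name"),
        gr.Slider(1, 32, step=1, label="Batch size"),
    ],
    outputs=gr.Textbox(label="Status"),
    title="mLoRA fine-tuning (sketch)",
)

if __name__ == "__main__":
    demo.launch()
```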

Fine-tuning multiple LoRA adapters on a single GPU may run into OOM errors; parameters such as batch_size and cutoff_len have to be tuned carefully (see the memory-saving sketch after this issue), but this still cannot guarantee to...

enhancement
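
Typical memory-saving knobs for this situation, sketched with Hugging Face `TrainingArguments` for illustration; the exact option names and values in mLoRA's own config may differ.

```python
# Memory-saving settings when several adapters share one GPU; values are illustrative.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,    # smaller micro-batch to fit in VRAM
    gradient_accumulation_steps=16,   # keep the effective batch size at 16
    gradient_checkpointing=True,      # trade extra compute for activation memory
    fp16=True,                        # half-precision activations and gradients
)

cutoff_len = 256  # truncate sequences; activation memory grows with sequence length
```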