mLoRA
An Efficient "Factory" to Build Multiple LoRA Adapters
We provide an example notebook that introduces how to use our system to fine-tune LLaMA-2 with fewer resources: https://www.kaggle.com/code/rraydata/multi-lora-example/notebook
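To illustrate why this saves resources, here is a minimal sketch of the core idea behind multi-LoRA training: one frozen base model shared by several LoRA adapters. This uses the Hugging Face `peft` library rather than mLoRA's own API, and the model name and hyperparameters are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Load the shared base model once (placeholder model id).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16
)

# First adapter: attach LoRA weights to the attention projections.
config_a = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config_a, adapter_name="task_a")

# Second adapter reuses the frozen base weights, so the extra memory
# cost is only the small LoRA matrices themselves.
config_b = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model.add_adapter("task_b", config_b)

# Switch the active adapter before each task's training step.
model.set_adapter("task_a")
```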
Please add documentation to the README describing how to evaluate a LoRA fine-tuned model.
We should provide a web UI so that end users can fine-tune their models via multi-LoRA, similar to https://modelscope.cn/studios/hiyouga/LLaMA-Board/summary
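As a rough sketch of what such a LLaMA-Board-style UI could look like, the snippet below uses Gradio. The `launch_finetune` function is hypothetical glue code; mLoRA does not ship this UI, and the layout is an assumption.

```python
import gradio as gr

def launch_finetune(base_model: str, adapter_name: str, lr: float) -> str:
    # Hypothetical: kick off an mLoRA training job here and return its status.
    return f"Started fine-tuning '{adapter_name}' on {base_model} (lr={lr})"

with gr.Blocks(title="Multi-LoRA Board") as demo:
    base_model = gr.Textbox(label="Base model", value="meta-llama/Llama-2-7b-hf")
    adapter_name = gr.Textbox(label="Adapter name", value="my-adapter")
    lr = gr.Slider(1e-5, 1e-3, value=3e-4, label="Learning rate")
    start = gr.Button("Start fine-tuning")
    status = gr.Textbox(label="Status")
    start.click(launch_finetune, inputs=[base_model, adapter_name, lr], outputs=status)

demo.launch()
```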
We need a model evaluation method.
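One possible evaluation method, sketched below, is to compute perplexity of a LoRA-adapted model on held-out text using `transformers` and `peft`. The base model id and adapter directory are placeholders.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"    # placeholder base model
adapter_dir = "./adapters/my-adapter"   # placeholder adapter checkpoint

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(model, adapter_dir)
model.eval()

text = "Held-out evaluation text goes here."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(f"perplexity = {math.exp(loss.item()):.2f}")
```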
Fine-tuning multiple LoRA adapters on a single GPU may run into OOM issues. Parameters such as batch_size and cutoff_len need to be adjusted carefully, but even this cannot guarantee to...
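For reference, here is a sketch of common OOM mitigations when several adapters train on one GPU: shorter sequences, smaller micro-batches compensated by gradient accumulation, 8-bit base weights, and gradient checkpointing. These are standard `transformers` settings; the corresponding option names in mLoRA's own config may differ.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit base weights
)
model.gradient_checkpointing_enable()  # trade extra compute for activation memory

cutoff_len = 256       # truncate training sequences (shorter -> less memory)
micro_batch_size = 1   # per-step batch; compensate with accumulation
grad_accum_steps = 16  # effective batch = micro_batch_size * grad_accum_steps
```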