mLoRA

An Efficient "Factory" to Build Multiple LoRA Adapters

25 mLoRA issues

Can mLoRA support the LLaVA model?

We have an actively developed fork of the official m-LoRA repository, focused on LoRA + MoE and related improvements, maintained by the authors of m-LoRA. URL: [https://github.com/mikecovlee/mlora](https://github.com/mikecovlee/mlora)

enhancement

The model can be loaded, but the weights of some LoRA (Low-Rank Adaptation) adapters in the ChatGLM model cannot be initialized. How can this be resolved?

## Traceback

```
Traceback (most recent call last):
  File "/home/mikecovlee/work/multi-lora-fine-tune/mlora.py", line 175, in <module>
    inference(config, model, tokenizer)
  File "/home/mikecovlee/work/multi-lora-fine-tune/mlora.py", line 106, in inference
    input_data = mlora.MultiLoraBatchData(
TypeError: MultiLoraBatchData.__init__() got an unexpected...
```

bug
enhancement
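Not part of the original thread, but a quick way to diagnose this kind of `TypeError` is to introspect the constructor that the installed package actually exposes; the sketch below assumes only that `mlora.MultiLoraBatchData` is importable, as in the traceback.

```python
import inspect

import mlora  # assumes the m-LoRA package is importable, as in the traceback

# Print the parameters MultiLoraBatchData.__init__ accepts in the installed
# version; a mismatch with the call site in mlora.py usually means the script
# and the library come from different commits of the repository.
sig = inspect.signature(mlora.MultiLoraBatchData.__init__)
print(sig)
print(sorted(sig.parameters))
```

If the printed signature lacks the keyword that the script passes, the usual fix is to update the script and the library to the same commit.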

Does the framework support multi-GPU training? I want to use the framework to train a 70B model; however, I did not find the parameter settings or methods for...

bug
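The repository's own multi-GPU options are not documented in this thread; purely as a generic illustration of data-parallel training in PyTorch (not m-LoRA's API), one can wrap the model in `DistributedDataParallel` and launch with `torchrun`.

```python
# Generic PyTorch data parallelism, launched with:
#   torchrun --nproc_per_node=8 train_ddp.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(4096, 4096).cuda(rank)  # stand-in for the real model
    ddp_model = DDP(model, device_ids=[rank])

    opt = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)
    x = torch.randn(8, 4096, device=rank)
    loss = ddp_model(x).pow(2).mean()
    loss.backward()  # gradients are all-reduced across GPUs here
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Note that a 70B model will not fit as a full replica on a single GPU, so in practice a sharded approach such as FSDP or DeepSpeed ZeRO would be needed on top of this pattern.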

I have been studying LoRA recently, and I noticed that during pre-training the word vectors change as training progresses. However, what about when using LoRA for fine-tuning? Do the...
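Not in the original post: under LoRA the base weights, including the embedding matrix, are typically frozen, and only the low-rank A and B matrices receive gradients, so the word vectors do not change during fine-tuning unless the embedding is explicitly made trainable. A minimal PyTorch sketch of that freezing pattern (the class and sizes are illustrative, not m-LoRA's implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: a frozen base weight plus a trainable
    low-rank update scaled by alpha / r."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze pretrained W (and bias)
            p.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

embedding = nn.Embedding(32000, 4096)
embedding.weight.requires_grad_(False)  # word vectors stay fixed under LoRA

layer = LoRALinear(nn.Linear(4096, 4096))
print([n for n, p in layer.named_parameters() if p.requires_grad])
# -> ['lora_a', 'lora_b']: only the adapter matrices receive gradients
```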

When I train the ChatGLM model, it seems it cannot produce the correct result. Can anyone fix it? @waitfor-night

Please provide the doc for "Merge LoRA weights and export model".

good first issue
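While the requested m-LoRA document is not available here, the underlying operation is standard: merging folds the low-rank update into the base weight, W' = W + (alpha / r) * B A, after which the adapter can be dropped and the model exported as an ordinary checkpoint. A hedged sketch in plain PyTorch follows; the file names and state-dict keys are assumptions for illustration, not m-LoRA's actual layout.

```python
import torch

def merge_lora(weight, lora_a, lora_b, alpha: float, r: int):
    """Fold a LoRA update into a frozen base weight.

    weight : (out, in)  frozen pretrained matrix W
    lora_a : (r, in)    low-rank factor A
    lora_b : (out, r)   low-rank factor B
    Returns W' = W + (alpha / r) * B @ A, usable without the adapter.
    """
    return weight + (alpha / r) * (lora_b @ lora_a)

# Illustrative merge over one layer of a checkpoint; the key names below
# are hypothetical.
state = torch.load("base_model.bin", map_location="cpu")
adapter = torch.load("adapter.bin", map_location="cpu")
key = "model.layers.0.self_attn.q_proj.weight"
state[key] = merge_lora(state[key], adapter[key + ".lora_A"],
                        adapter[key + ".lora_B"], alpha=16, r=8)
torch.save(state, "merged_model.bin")
```

For adapters saved in the Hugging Face PEFT format, the equivalent step is `PeftModel.merge_and_unload()`.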