LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
Does it support multi-GPU training of llama3 with LISA?
Hey, great to see LISA implemented here. As for the background: I am trying to fine-tune models with LoRA and other techniques on domain data, but the task I am doing...
I tried fine-tuning the llama-2-7b model using LoRA on an RTX 3090 with 24GB, where memory usage was only about 17GB. However, when I used the same configuration on an...
If there are multiple GPUs, can I still run the LISA method directly with the script ./scripts/run_finetune_with_lisa.sh? Or do I need to set multi-GPU parameters?
[New Feature] Could someone share the finetuned diffusion model which is good at 256x256 resolution?
Hi everyone, I want to use a stable-diffusion model that works well at 256x256 resolution. If someone could share a fine-tuned checkpoint, that would be great! Thanks a lot!
What are the minimum requirements to fine-tune a small model like openlm-research/open_llama_3b and a big model like llama2-7b?
Hi author, when I use your project to fine-tune a local model, I get a prompt asking for "trust_remote_code=True". Where should I change the code? Can you provide an example?
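In case it helps, this prompt usually comes from the underlying Hugging Face transformers library rather than from LMFlow itself: models whose repo ships custom code require `trust_remote_code=True` when loaded. A minimal sketch of where that flag goes, assuming a plain transformers load (the model path here is a placeholder, not from the original post):

```python
# Hedged sketch: passing trust_remote_code when loading a model with
# Hugging Face transformers. The path "/path/to/local/model" is a
# placeholder assumption; substitute your own local model directory.

def build_load_kwargs(trust_remote_code: bool = True) -> dict:
    """Collect the keyword arguments forwarded to from_pretrained()."""
    return {"trust_remote_code": trust_remote_code}

# The actual load is commented out so the sketch stays self-contained:
# from transformers import AutoModelForCausalLM, AutoTokenizer
# kwargs = build_load_kwargs()
# tokenizer = AutoTokenizer.from_pretrained("/path/to/local/model", **kwargs)
# model = AutoModelForCausalLM.from_pretrained("/path/to/local/model", **kwargs)

print(build_load_kwargs())
```

If LMFlow's launch scripts expose a matching option, the same flag would be threaded through from there; otherwise, the `from_pretrained` call site is the place to add it.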
I have a server with 128GB RAM, and it freezes when I follow the quick-start procedure. On another server with 512GB RAM it works fine. I think adding...
**Describe the bug** I think there might be something wrong with the current LISA implementation. There is no difference in training loss, no matter how many layers are active. Not...
Can it support the https://github.com/haotian-liu/LLaVA model?