
How do we finetune the model with new data?

Open ekolawole opened this issue 1 year ago • 15 comments

Can we have a finetune.cpp or finetune.exe to incorporate new data into the model? The use case is to build an AI model that can do more than just general chat: it can become very knowledgeable in the specific topics it is finetuned on. Also, after creating finetune.exe, please ensure no GPU is required for the entire process, because that is what makes this repo awesome in the first place.

ekolawole avatar Mar 24 '23 16:03 ekolawole

Sounds cool. But this is not on the short-term Roadmap.

Green-Sky avatar Mar 24 '23 16:03 Green-Sky

The goal of these integrations is to enable academia to adapt to the new era of AI, and to simplify the intricacies involved. Users should be able to finetune their models to suit their data needs. I was running the 30B this morning, and the AI does not have important data about LangChain and other recent use cases from 2021 until now. I believe the data used to build the models is old. My team is looking for a no-GPU deployment like this one that can also support finetuning. What can be done to move this request ahead on the roadmap?

ekolawole avatar Mar 24 '23 16:03 ekolawole

What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible, because you'd be training for literal months instead of days. You need a GPU to actually finetune this. This repository is only for inference/running the model.
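The "months instead of days" claim can be sanity-checked with the common rule of thumb that training costs roughly 6 FLOPs per parameter per token. The corpus size and throughput numbers below are illustrative assumptions, not benchmarks of any specific hardware:

```python
# Back-of-envelope fine-tuning cost estimate using the rough
# "FLOPs ~= 6 * parameters * tokens" rule of thumb.
params = 7e9    # LLaMA-7B
tokens = 100e6  # a modest fine-tuning corpus (assumed)

flops = 6 * params * tokens

cpu_flops_per_s = 2e11    # ~200 GFLOP/s, optimistic for a desktop CPU
gpu_flops_per_s = 1.5e14  # ~150 TFLOP/s sustained on a datacenter GPU

print(f"CPU: {flops / cpu_flops_per_s / 86400:.0f} days")   # -> CPU: 243 days
print(f"GPU: {flops / gpu_flops_per_s / 3600:.1f} hours")   # -> GPU: 7.8 hours
```

Even with generous assumptions for the CPU, the gap is roughly three orders of magnitude, which is where the months-versus-days framing comes from.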

rupakhetibinit avatar Mar 24 '23 17:03 rupakhetibinit

> What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible, because you'd be training for literal months instead of days. You need a GPU to actually finetune this. This repository is only for inference/running the model.

I think it depends on the fine-tuning approach. If the LoRA approach is used (only the k, q, v projection layers, if I understand it correctly), then it could be done on CPU, and we could transfer and share the LoRA adapters.
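The reason LoRA makes CPU training plausible is that it freezes the base weight matrix and trains only a low-rank update. A minimal sketch of the idea, with shapes and the alpha/r scaling as in the LoRA paper (the dimensions and rank here are illustrative):

```python
import numpy as np

# Instead of updating a full d x d projection matrix W, LoRA trains
# a low-rank pair B (d x r) and A (r x d) and applies
# W + (alpha / r) * B @ A at inference time.
d, r, alpha = 4096, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d)).astype(np.float32)  # frozen base weight
A = rng.standard_normal((r, d)).astype(np.float32) * 0.01
B = np.zeros((d, r), dtype=np.float32)  # B starts at zero: no initial change

W_adapted = W + (alpha / r) * (B @ A)

full = W.size
lora = A.size + B.size
print(f"trainable params: {lora:,} vs {full:,} ({100 * lora / full:.2f}%)")
# -> trainable params: 65,536 vs 16,777,216 (0.39%)
```

Only A and B receive gradients, which is why the resulting adapter is small enough to "transfer and share" separately from the base model.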

PriNova avatar Mar 24 '23 18:03 PriNova

loading only the LoRA part IS on the short-term roadmap: https://github.com/ggerganov/llama.cpp/discussions/457

Green-Sky avatar Mar 24 '23 18:03 Green-Sky

There is the lxe/simple-llama-finetuner repo available for finetuning but you need a GPU with at least 16GB VRAM to finetune the 7B model.
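The 16 GB figure makes sense once you count optimizer state. A rough VRAM estimate, ignoring activations and framework overhead (the bytes-per-parameter numbers are standard mixed-precision Adam accounting, but the totals are only ballpark):

```python
params = 7e9  # LLaMA-7B

# Full fine-tune with Adam in mixed precision:
# fp16 weights (2 B) + fp16 grads (2 B) + fp32 Adam m/v (8 B)
# + fp32 master weights (4 B) ~= 16 bytes per parameter.
full_gb = params * 16 / 1e9

# LoRA on an 8-bit quantized base: ~1 byte per frozen parameter,
# plus a negligible adapter.
lora_gb = params * 1 / 1e9

print(f"full fine-tune: ~{full_gb:.0f} GB, LoRA on 8-bit base: ~{lora_gb:.0f} GB")
# -> full fine-tune: ~112 GB, LoRA on 8-bit base: ~7 GB
```

So a full fine-tune of 7B is out of reach for any single consumer GPU, while a LoRA fine-tune on a quantized base fits in a 16 GB card with room for activations.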

leszekhanusz avatar Mar 27 '23 16:03 leszekhanusz

Is there a way to fine-tune these models for reading my documents, etc., using cloud hardware but no OpenAI, Pinecone, or non-free third-party dependencies? Code examples would be awesome (I've seen LangChain's docs, but they are not detailed enough, at least not for me). @leszekhanusz @Green-Sky @PriNova @rupakhetibinit @ekolawole

Free-Radical avatar Apr 17 '23 18:04 Free-Radical

@Free-Radical, try a vector store, such as Weaviate. Your query string can contain natural-language text; the response is based on vector similarity between that string and the documents in the store. I also tried Vespa, but it didn't work at all. The reason is a design choice that I find questionable, see https://github.com/vespa-engine/pyvespa/issues/499 for details. There are other open source vector store solutions too.
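Under the hood, all of these stores do the same thing: embed documents and the query into vectors, then rank by similarity. A toy sketch of that mechanism (a real system like Weaviate uses a learned embedding model and an approximate nearest-neighbor index; the 3-dimensional vectors here are made up for illustration):

```python
import numpy as np

docs = {
    "doc_a": np.array([0.9, 0.1, 0.0]),
    "doc_b": np.array([0.1, 0.9, 0.2]),
    "doc_c": np.array([0.8, 0.2, 0.1]),
}
query = np.array([1.0, 0.0, 0.1])

def cosine(u, v):
    # Cosine similarity: dot product of the normalized vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

ranked = sorted(docs, key=lambda k: cosine(query, docs[k]), reverse=True)
print(ranked)  # -> ['doc_a', 'doc_c', 'doc_b']
```

The top-ranked documents are then pasted into the LLM's context, which is how you get "knowledge of your documents" without any fine-tuning at all.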

ch3rn0v avatar Apr 17 '23 19:04 ch3rn0v

@ch3rn0v Thanks, man. Weaviate looks good, better than going "raw" with FAISS. Will check out Vespa too.

Free-Radical avatar Apr 17 '23 23:04 Free-Radical

@Free-Radical you can look at https://github.com/tloen/alpaca-lora. Loading LoRA adapter support was also merged today (https://github.com/ggerganov/llama.cpp/pull/820), so I suggest you stay on the LoRA side (lower quality, but way, way faster to train).

Green-Sky avatar Apr 18 '23 00:04 Green-Sky

Since @xaedes contributed the backward versions of the necessary tensors, this could now be within reach: https://github.com/ggerganov/llama.cpp/pull/1360

(afaik this is the tracking issue for finetuning)

Green-Sky avatar May 15 '23 11:05 Green-Sky

> The goal of these integrations is to enable academia to adapt to the new era of AI, and to simplify the intricacies involved. Users should be able to finetune their models to suit their data needs. I was running the 30B this morning, and the AI does not have important data about LangChain and other recent use cases from 2021 until now. I believe the data used to build the models is old. My team is looking for a no-GPU deployment like this one that can also support finetuning. What can be done to move this request ahead on the roadmap?

I agree. It would be helpful to be able to fine-tune LLaMA models using only llama.cpp on CPU.

Sovenok-Hacker avatar May 20 '23 15:05 Sovenok-Hacker

> What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible, because you'd be training for literal months instead of days. You need a GPU to actually finetune this. This repository is only for inference/running the model.

I disagree. What if we only need to add a little data? It would be done in hours, so why not add a small fine-tuning utility?

Sovenok-Hacker avatar May 20 '23 15:05 Sovenok-Hacker

Hopefully this will be possible someday. Like many others, I do not have the VRAM to fine-tune or create a LoRA for models.

I wonder if it's possible to use the newly added CUDA acceleration in llama.cpp to fine-tune quantized models, so it doesn't take ages compared to a CPU-only approach.

Dampfinchen avatar May 26 '23 06:05 Dampfinchen

> What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible, because you'd be training for literal months instead of days. You need a GPU to actually finetune this. This repository is only for inference/running the model.

> I disagree. What if we only need to add a little data? It would be done in hours, so why not add a small fine-tuning utility?

I'm afraid it's not as simple as a little fine-tuning utility. While you may only want to add a small amount of data, fine-tuning requires updating many weights in the model. Even a small change can have a significant impact on the entire model, so it typically involves retraining or adjusting a considerable portion of the weights.

Tom-0727 avatar Jun 26 '23 09:06 Tom-0727

> What you're talking about is training/finetuning, which is theoretically possible on CPU but practically infeasible, because you'd be training for literal months instead of days. You need a GPU to actually finetune this. This repository is only for inference/running the model.

> I disagree. What if we only need to add a little data? It would be done in hours, so why not add a small fine-tuning utility?

> I'm afraid it's not as simple as a little fine-tuning utility. While you may only want to add a small amount of data, fine-tuning requires updating many weights in the model. Even a small change can have a significant impact on the entire model, so it typically involves retraining or adjusting a considerable portion of the weights.

Yes, but a small amount of data means a small number of iterations. And we can use LoRA or QLoRA to train only the adapter and make fine-tuning simpler.

Sovenok-Hacker avatar Aug 14 '23 16:08 Sovenok-Hacker

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Apr 10 '24 01:04 github-actions[bot]