LLamaSharp
Is it possible to train or fine tune a model with LLamaSharp?
Hi,
How can I fine-tune or train a llama model with LLamaSharp? I couldn't find any documentation about fine-tuning, training a model, or using the LLamaSharp library for other purposes.
Thank you.
Hi, LLamaSharp cannot train or fine-tune a llama model yet. llama.cpp added such a feature about 20 days ago, but it's not stable and only a demo at the beginning, as you can see in https://github.com/ggerganov/llama.cpp/blob/master/examples/baby-llama/baby-llama.cpp. We'll include this feature in the next 2-3 minor versions to follow up llama.cpp.
As for other uses, integrations with semantic-kernel and BotSharp are under way now, as you can see in the semantic kernel integration and BotSharp. Through these two, users can apply LLamaSharp in more scenarios. For example, semantic kernel provides rich features for handling LLMs and chat AI, while BotSharp makes it easy to deploy a chat bot and is compatible with Chat Bot UI.
Hi,
Nice to hear that LLamaSharp will be trainable with C#.
Thank you for the answer.
Hi,
Is there any news about this feature?
Not yet. I'll do it after completing the OpenAI-style APIs and the integration with semantic-kernel under LLamaSharp v0.4.x.
Thank you
This is very intriguing. How will this feature work? Will it be possible to add more sets of questions and answers to an existing model?
@AsakusaRinne do you have any news about this feature? Is it possible? Thanks!
Happy to see this being worked on, thanks!
Me too, please let me know if you need any help with implementing this feature
Hi all, I'm sorry for delaying this feature for such a long time. The only problem for me is that my time is limited. I investigated the implementation in llama.cpp two months ago; however, I'm not sure how much has changed since then.
Though I'm not able to support the whole feature as well as inference yet, I could try to support part of it first. Could you please tell me what you need to start a fine-tune? (For example: data format, fine-tune process control, etc.)
Hi @AsakusaRinne ,
I apologize, I have only just seen your post. I need the data format and some tips on making data useful for LLaMA.
Thank you
For me it's the same: some examples and the data format, as you mentioned =)
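Not an official answer to the data-format question above, but as a sketch: many fine-tuning workflows around llama.cpp start from Alpaca-style instruction records and flatten them into a plain-text training corpus. The `instruction`/`input`/`output` field names, the prompt headings, and the `train.txt` path below are community conventions chosen for illustration, not anything LLamaSharp or llama.cpp mandates.

```python
# Hypothetical Alpaca-style records; the field names are a common
# community convention for instruction-tuning data.
records = [
    {"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"},
    {"instruction": "What is 2 + 2?", "input": "", "output": "4"},
]

def to_prompt(r):
    # Flatten one record into a single plain-text training sample;
    # the "### ..." headings are just a readable delimiter scheme.
    if r["input"]:
        return (f"### Instruction:\n{r['instruction']}\n\n"
                f"### Input:\n{r['input']}\n\n"
                f"### Response:\n{r['output']}")
    return (f"### Instruction:\n{r['instruction']}\n\n"
            f"### Response:\n{r['output']}")

# Join samples with blank lines into one text file, the kind of raw
# text corpus that text-based training examples typically consume.
corpus = "\n\n".join(to_prompt(r) for r in records)
with open("train.txt", "w", encoding="utf-8") as f:
    f.write(corpus)
```

Whatever format the eventual LLamaSharp API settles on, keeping the source data as structured records like these makes it cheap to regenerate the flattened corpus if the prompt template changes.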