llama.cpp
About dialogue training mode
For finetuning, the default is to use a plain-text dataset (e.g. shakespeare.txt). Does it support a dialogue training mode?
It does not support a specific mode for dialogue training; such a thing does not exist to my knowledge. Instruction tuning and RLHF could loosely be described as dialogue training.
Consider whether your issue is relevant to the development of llama.cpp.
If it is not, I recommend reading up on dataset creation and instruction tuning; here is a technical paper. There are also plenty of resources and guides on YouTube and around the internet. (A rough sketch of the dataset-creation step follows after this reply.)
If it really is an issue relevant to the development of llama.cpp, please explain it in much greater detail. What do you want to achieve? What have you done so far to achieve it?
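To make the dataset-creation point concrete: since the finetune example works on raw text, one common approach is to render instruction/response pairs into a single text file using a fixed prompt template. The sketch below is only an illustration, not anything llama.cpp provides; the Alpaca-style `instruction`/`input`/`output` fields, the template, and the file names `instructions.json` and `train.txt` are all assumptions.

```python
# Hypothetical sketch: flatten an Alpaca-style instruction dataset (a JSON list of
# {"instruction", "input", "output"} records) into one plain-text file that a
# raw-text finetuner can consume. Field names, the prompt template, and the file
# names are assumptions, not part of llama.cpp.
import json

PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n{output}\n\n"
)

def build_training_text(records):
    """Render each record with the template and join everything into one string."""
    chunks = []
    for rec in records:
        chunks.append(PROMPT_TEMPLATE.format(
            instruction=rec["instruction"],
            input=rec.get("input", ""),
            output=rec["output"],
        ))
    return "".join(chunks)

if __name__ == "__main__":
    with open("instructions.json", encoding="utf-8") as f:  # hypothetical input file
        records = json.load(f)
    with open("train.txt", "w", encoding="utf-8") as f:     # plain text for the finetuner
        f.write(build_training_text(records))
```

The important part is consistency: whatever template you train on is the template you should prompt with later.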
Yes, actually I mean instruction tuning. Is there a demo for it in llama.cpp?
Not to my knowledge, and I would not expect one. This is not the place to ask about this either; I recommend researching what you're trying to do elsewhere. There are plenty of good introductory articles, and videos on YouTube if you prefer.
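If by "dialogue training" you mean multi-turn conversations, the same idea as above applies: flatten each conversation into plain text with role markers and feed it to a raw-text finetuner. A minimal sketch, assuming conversations stored as lists of `{"role", "content"}` turns; the marker format and file names are made up for illustration and are not a template defined by llama.cpp.

```python
# Hypothetical sketch: flatten multi-turn conversations into plain text for a
# raw-text finetuner. The turn structure and the "### User:" / "### Assistant:"
# markers are assumptions, not anything llama.cpp defines.
import json

ROLE_MARKERS = {"user": "### User:", "assistant": "### Assistant:"}

def flatten_conversation(turns):
    """Turn a list of {"role", "content"} dicts into one training string."""
    lines = []
    for turn in turns:
        marker = ROLE_MARKERS.get(turn["role"], f"### {turn['role'].title()}:")
        lines.append(f"{marker}\n{turn['content']}\n")
    return "\n".join(lines) + "\n\n"

if __name__ == "__main__":
    with open("conversations.json", encoding="utf-8") as f:  # hypothetical input file
        conversations = json.load(f)
    with open("dialogue_train.txt", "w", encoding="utf-8") as f:
        for turns in conversations:
            f.write(flatten_conversation(turns))
```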