PLEASE-n_n

3 issues labeled PLEASE-n_n

I would like to train a chatbot with LoRA fine-tuning on my own datasets. I used the 'text2text' structure, putting all the questions in order as input and all the answers...
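A minimal sketch of how such a question/answer dataset could be assembled for text2text-style fine-tuning. The `type`/`instances`/`input`/`output` keys follow the common text2text JSON layout and are assumptions here; adjust them to whatever the fine-tuning scripts actually expect.

```python
# Sketch: build a text2text-style JSON dataset from parallel
# question/answer lists. Field names are assumptions; check the
# project's dataset documentation for the exact schema.
import json

questions = ["What is LoRA?", "How large is a LoRA adapter?"]
answers = [
    "LoRA adds low-rank adapter weights to a frozen base model.",
    "Usually a few megabytes, depending on rank and target modules.",
]

dataset = {
    "type": "text2text",
    "instances": [
        {"input": q, "output": a} for q, a in zip(questions, answers)
    ],
}

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(dataset, f, ensure_ascii=False, indent=2)
```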

I would like to try deploying the Robin-7B model on my local machine. I downloaded the robin-7b_v2 model from Hugging Face and merged it with the LLaMA-7B base...
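If the downloaded checkpoint is distributed as a weight delta on top of LLaMA-7B, a merged model can be produced by adding the delta tensors to the base tensors. The sketch below is an assumption about the merge convention and uses placeholder paths; the project may ship its own merge script, so prefer that if available.

```python
# Sketch: merge a delta checkpoint into a LLaMA-7B base by adding delta
# tensors to the corresponding base tensors, then save the result locally.
# Paths and the "base + delta" convention are assumptions; consult the
# model card for the official merge procedure.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "path/to/llama-7b"        # local LLaMA-7B base weights
delta_path = "path/to/robin-7b_v2"    # downloaded delta weights
output_path = "path/to/robin-7b-merged"

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16)
delta = AutoModelForCausalLM.from_pretrained(delta_path, torch_dtype=torch.float16)

base_state = base.state_dict()
for name, delta_param in delta.state_dict().items():
    # state_dict() returns references, so this updates the base model in place.
    # Assumes base and delta share identical parameter names and shapes.
    base_state[name] += delta_param

base.save_pretrained(output_path)
AutoTokenizer.from_pretrained(delta_path).save_pretrained(output_path)
```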

Could you tell me how to use this conversation_template in the chatbot? I used a training dataset that follows the Llama-3 conversation_template, but there doesn’t seem to be an argument...
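When the checkpoint's tokenizer ships a chat template (as Llama-3 tokenizers do), the training-time conversation format can be reproduced at inference with transformers' `apply_chat_template`, independent of any command-line argument. A minimal sketch, with the model path as a placeholder:

```python
# Sketch: render a conversation with the tokenizer's built-in chat template
# (Llama-3-style headers and end-of-turn tokens) so the inference prompt
# matches the training-time format. "path/to/merged-model" is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "How do I deploy this model locally?"},
]

# add_generation_prompt appends the assistant header so the model
# continues the conversation as the assistant.
prompt_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output = model.generate(prompt_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][prompt_ids.shape[-1]:], skip_special_tokens=True))
```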
