Does the prompt get adjusted based on the LLM being used?
Let's say I am using a Llama 2 model to evaluate my RAG application. Does ragas take care of formatting the prompt the way Llama 2 expects, e.g. wrapping it as described in https://huggingface.co/blog/llama2#how-to-prompt-llama-2?
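For reference, the chat format from that blog post looks roughly like this (my own sketch of the template, not ragas code):

```python
def to_llama2_chat(system_prompt: str, user_message: str) -> str:
    """Wrap a plain prompt in the Llama 2 chat template described in the HF blog post."""
    return (
        "<s>[INST] <<SYS>>\n"
        f"{system_prompt}\n"
        "<</SYS>>\n\n"
        f"{user_message} [/INST]"
    )
```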
Interesting point @VpkPrasanna. We use langchain underneath to wrap all the LLMs, but I'm not sure if langchain handles this properly. I'll try to dig into it, but I would love to hear your thoughts.
langchain does not wrap the prompt based on the LLM we use, because it does not know which model we are passing, so it has to be handled separately.
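One way to handle it separately (just a sketch, assuming a recent `transformers` version where `apply_chat_template` is available, not something ragas does today) is to let the model's own tokenizer apply the chat template before the prompt string reaches the LLM:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

def wrap_prompt(prompt: str) -> str:
    # Use the tokenizer's built-in chat template to produce the [INST]/<<SYS>> wrapping
    # instead of hard-coding it for each model family.
    messages = [{"role": "user", "content": prompt}]
    return tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```

You would then pass the wrapped string to whatever LLM client you use for evaluation.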
I ran into the same problem. When I used the LLama2_7b_chat_hf model to evaluate my RAG results, I found some metrics like answer_correctness were always np.nan. I guess the code cannot parse the output generated by Llama 2.
We could actually do this as part of prompt adaptation, in addition to language adaptation. But how it works internally is going to be the hard part.
One method I can see is that we save the prompts online, and when someone adapts them, we download the strings. Cheap and fast, but harder to scale. A rough sketch of the idea is below.
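Something like this (the endpoint and JSON layout are made up, just to show the shape of it):

```python
import json
import urllib.request
from functools import lru_cache

# Hypothetical prompt store; each adapted prompt lives at a predictable URL.
PROMPT_STORE = "https://example.com/ragas-prompts/{name}/{language}.json"

@lru_cache(maxsize=None)
def fetch_adapted_prompt(name: str, language: str) -> str:
    # Download the pre-adapted prompt string once and cache it in memory.
    url = PROMPT_STORE.format(name=name, language=language)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["prompt"]
```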
@shahules786 any other ideas you have?