
Does the prompt get adjusted based on the LLM used?

Open VpkPrasanna opened this issue 1 year ago • 4 comments

Let's say I am using a Llama 2 model to evaluate my RAG applications. Does ragas take care of formatting the prompt the way Llama 2 expects, e.g. wrapping it as described in https://huggingface.co/blog/llama2#how-to-prompt-llama-2?
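For reference, the chat template from the linked Hugging Face post looks like this. The helper function and its name are illustrative, not part of ragas:

```python
def format_llama2_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a plain prompt in the Llama 2 chat template.

    Template taken from the Hugging Face "How to prompt Llama 2" post;
    this helper is just an illustration, not a ragas API.
    """
    if system_prompt:
        return (
            f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
            f"{user_message} [/INST]"
        )
    return f"<s>[INST] {user_message} [/INST]"


print(format_llama2_prompt("Rate the faithfulness of this answer.",
                           "You are an impartial evaluator."))
```

A base (non-chat) Llama 2 model would not need this wrapping; it only matters for the chat-tuned variants.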

VpkPrasanna avatar Feb 08 '24 12:02 VpkPrasanna

Interesting point @VpkPrasanna. We use LangChain underneath to wrap all the LLMs, but I am not sure if LangChain handles this properly. I'll try to dig into it. I would love to hear your thoughts.

shahules786 avatar Feb 08 '24 19:02 shahules786

> Interesting point @VpkPrasanna. We use LangChain underneath to wrap all the LLMs, but I am not sure if LangChain handles this properly. I'll try to dig into it. I would love to hear your thoughts.

LangChain does not wrap the prompt based on the LLM in use, because it does not know which model we pass, so it has to be handled separately.
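"Handled separately" could be as simple as wrapping the completion callable before handing it to the evaluator. Everything below (the wrapper name, the echo model) is a hedged sketch, not a ragas or LangChain API:

```python
from typing import Callable


def with_llama2_template(generate: Callable[[str], str]) -> Callable[[str], str]:
    """Return a callable that wraps each prompt in Llama 2's [INST] tags
    before delegating to the underlying model. Sketch only -- a real
    integration would need an equivalent hook per model family."""
    def wrapped(prompt: str) -> str:
        return generate(f"<s>[INST] {prompt} [/INST]")
    return wrapped


# Fake model that just echoes its input, to show the wrapping.
echo_model = lambda p: p
wrapped_model = with_llama2_template(echo_model)
print(wrapped_model("score this answer"))  # → <s>[INST] score this answer [/INST]
```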

VpkPrasanna avatar Feb 09 '24 04:02 VpkPrasanna

I ran into the same problem. When I used the Llama-2-7b-chat-hf model to evaluate my RAG results, some metrics such as answer_correctness were always np.nan. I suspect the code cannot parse the results generated by Llama 2.
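That would be consistent with a parse failure: if the model's raw output isn't in the structured form the metric expects, the score falls back to NaN. A minimal sketch of that failure mode (not the actual ragas parser, which may expect a different structure):

```python
import json


def parse_score(raw_output: str) -> float:
    """Return the 'score' field from a JSON response, or NaN when the
    output cannot be parsed -- mimicking how a metric ends up as np.nan."""
    try:
        return float(json.loads(raw_output)["score"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return float("nan")


print(parse_score('{"score": 0.8}'))          # well-formed JSON parses cleanly
print(parse_score("Sure! The score is 0.8"))  # chatty free-text reply → nan
```

Chat models that haven't been prompted in their expected format tend to produce exactly this kind of chatty, unparseable output.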

xpxxx avatar Feb 17 '24 21:02 xpxxx

We could actually do this as part of prompt adaptation, in addition to language. But making it work internally is going to be hard.

One method I can see is that we save the prompts online, and when someone adapts, we download the strings. Cheap and fast, but harder to scale.

@shahules786, do you have any other ideas?
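The save-online-and-download idea could look roughly like this. The store layout, keys, and prompt strings here are purely hypothetical, standing in for whatever would actually be downloaded:

```python
# Hypothetical local mirror of a remote prompt store, keyed by
# (prompt_name, target model family). In the real proposal the
# strings would be fetched over the network, not hard-coded.
PROMPT_STORE = {
    ("faithfulness", "llama2"): "[INST] Judge the faithfulness of the answer. [/INST]",
    ("faithfulness", "default"): "Judge the faithfulness of the answer.",
}


def adapt_prompt(name: str, target: str) -> str:
    """Fetch a pre-adapted prompt string, falling back to the default
    template when no model-specific variant exists."""
    return PROMPT_STORE.get((name, target), PROMPT_STORE[(name, "default")])


print(adapt_prompt("faithfulness", "llama2"))
```

The fallback path is what keeps this cheap: only prompts that actually need model-specific wrapping would ever be stored.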

jjmachan avatar Feb 19 '24 06:02 jjmachan