ragas
How can I ignore specific symbols when evaluating answer_correctness
[ ] I checked the documentation and related resources and couldn't find an answer to my question.
### Your Question
My dataset: {'question': 'what are you going', 'answer': "I'm going to have a meal", 'ground_truth': "I'm going to have a meal."}

### Code Examples
score = evaluate(dataset, metrics=[answer_correctness, answer_similarity])

### Additional context
The result is: {'answer_correctness': 0.2490, 'answer_similarity': 0.9960}

When the dataset is {'question': 'what are you going', 'answer': "I'm going to have a meal.", 'ground_truth': "I'm going to have a meal."} (the answer now ends with a period), the result is {'answer_correctness': 1.0000, 'answer_similarity': 1.0000}.
My LLM: OpenAI gpt-4o. Embedding model: amazon.titan-embed-text-v2:0.
### Question
Is there any way to score correctness higher by ignoring specific symbols, like "---" or "****", or certain letters? Sometimes even when the answer and the ground truth are the same, the score is low, and I am not sure how to improve it. Can ground_truth support multiple candidates, like: {'question': 'what are you going', 'answer': "I'm going to have a meal", 'ground_truth': ["I'm going to have a meal.", "xxxxxx", "xxxxxx"]}?
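One possible workaround, assuming ragas has no built-in option for this, is to normalize the `answer` and `ground_truth` strings before building the dataset, so that trailing punctuation or decorative symbols like "---" and "****" no longer lower the score. This is a minimal, hypothetical preprocessing sketch (the `normalize` helper and `row` dict are illustrative, not part of the ragas API):

```python
import string

# Characters to strip from the ends of each string: punctuation
# (covers '-', '*', '.', etc.) plus surrounding whitespace.
STRIP_CHARS = string.punctuation + string.whitespace

def normalize(text: str) -> str:
    """Lowercase and strip leading/trailing symbols such as '---' or '****'."""
    return text.lower().strip(STRIP_CHARS)

# Example row from the dataset above; the answer and ground truth differ
# only by a trailing period.
row = {
    "question": "what are you going",
    "answer": "I'm going to have a meal",
    "ground_truth": "I'm going to have a meal.",
}
row["answer"] = normalize(row["answer"])
row["ground_truth"] = normalize(row["ground_truth"])

# After normalization the two strings are identical.
assert row["answer"] == row["ground_truth"]
```

Applying this to every row before calling evaluate (for example via `Dataset.map` if you use a Hugging Face dataset) should keep symbol-only differences from dragging down answer_correctness; note it only strips symbols at the ends of the string, not ones embedded in the middle.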