ragas
Context_recall returns 1 when context is empty
[ ] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug: All instances in my evaluation set where the context is `[""]` get context_recall = 1. I am using GPT-3.5 as the judge.
Code to Reproduce: Just feed the metric an example with context = `[""]` and compute it.
Expected behavior: I expect the metric value to be 0.
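A minimal reproduction sketch along those lines (column names assume a recent ragas version; older releases used `ground_truths` instead of `ground_truth`, and running the judge requires an `OPENAI_API_KEY` in the environment — the sample question/answer values are made up for illustration):

```python
def make_sample():
    """One evaluation row whose retrieved context is the empty string."""
    return {
        "question": ["What is the capital of France?"],
        "contexts": [[""]],  # empty context: recall should arguably be 0
        "answer": ["Paris"],
        "ground_truth": ["Paris is the capital of France."],
    }


def reproduce():
    # Imports are deferred so the sample itself can be inspected
    # without ragas installed.
    from datasets import Dataset
    from ragas import evaluate
    from ragas.metrics import context_recall

    return evaluate(Dataset.from_dict(make_sample()),
                    metrics=[context_recall])


if __name__ == "__main__":
    # Reported behavior: context_recall comes back as 1 instead of 0.
    print(reproduce())
```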
I'm wondering the same thing!
I evaluated 69 questions without context (just ['']) on context_recall and context_precision.
Across those 138 metric evaluations (69 questions × 2 metrics), 88 came back with a non-zero score! Scores range anywhere from 0.2 to 1.
There seems to be a bug regarding the calculation of these metrics.