adityamity
@dosu : In the above steps, can I generate questions using the RAGAS testset generator and run the llama_index evaluator on the questions generated by RAGAS? ``` generator = TestsetGenerator.from_langchain(...
@dosu which version of llama-index includes metrics like precision and recall? I am currently using version 0.10.13.post1 of llama-index, but when I ran the code: ``` from llama_index.core.evaluation import RetrieverEvaluator...
@dosu in the above code where you created `testset_df` using the Ragas `TestsetGenerator`, I see an issue in the line of code you provided: ``` # Convert the...
@dosu the contexts column created by RAGAS does not have an 'id'; it only contains a list of answers.
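Since the RAGAS contexts carry no ids, one workaround is to mint synthetic context ids while converting the testset into the query/corpus/relevant-docs shape that llama_index's retrieval evaluation expects. A minimal sketch with plain dicts; the `question`/`contexts` field names are assumptions based on RAGAS's usual testset schema and may differ by version:

```python
# Convert RAGAS-style testset rows into the query -> relevant-context-id
# mapping used by llama_index retrieval evaluation. Context ids do not
# exist in the RAGAS output, so we generate them here.
# Assumption: each row has "question" (str) and "contexts" (list of str).

def ragas_rows_to_qa_dict(rows):
    queries = {}        # query_id -> question text
    corpus = {}         # context_id -> context text
    relevant_docs = {}  # query_id -> list of relevant context ids
    for i, row in enumerate(rows):
        qid = f"q{i}"
        queries[qid] = row["question"]
        ctx_ids = []
        for j, ctx in enumerate(row["contexts"]):
            cid = f"q{i}_ctx{j}"   # synthetic id, minted here
            corpus[cid] = ctx
            ctx_ids.append(cid)
        relevant_docs[qid] = ctx_ids
    return queries, corpus, relevant_docs

rows = [{"question": "What is X?", "contexts": ["X is a thing."]}]
queries, corpus, relevant_docs = ragas_rows_to_qa_dict(rows)
print(queries)         # {'q0': 'What is X?'}
print(relevant_docs)   # {'q0': ['q0_ctx0']}
```

The resulting dicts match the constructor arguments of `EmbeddingQAFinetuneDataset(queries=..., corpus=..., relevant_docs=...)`, though the exact signature should be checked against your llama-index version.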
@dosu I'm getting the error below: ``` eval_results = await retriever_evaluator.aevaluate_dataset(llamaindex_testset) File /anaconda/lib/python3.10/site-packages/llama_index/core/evaluation/retrieval/base.py:188, in BaseRetrievalEvaluator.aevaluate_dataset(self, dataset, workers, show_progress, **kwargs) AttributeError: 'list' object has no attribute 'mode' ``` It looks like `aevaluate_dataset` was given a plain list rather than a dataset object.
@dosu how does llama-index evaluate precision and recall using the `expected_id` and `retrieved_ids`? I see the code below for the calculation: ``` class Precision(BaseRetrievalMetric): """Precision metric. The...
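The calculation in question boils down to set overlap between the expected and retrieved node ids. A simplified re-implementation of that logic (not the library's actual code, but consistent with the standard definitions the `Precision`/`Recall` metric classes implement; the deduplication via sets is an assumption):

```python
# Set-based precision and recall over node ids:
#   precision = |expected ∩ retrieved| / |retrieved|
#   recall    = |expected ∩ retrieved| / |expected|

def precision_recall(expected_ids, retrieved_ids):
    expected, retrieved = set(expected_ids), set(retrieved_ids)
    hits = expected & retrieved
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(expected) if expected else 0.0
    return precision, recall

# One relevant id among four retrieved, out of two expected:
p, r = precision_recall(["n1", "n2"], ["n1", "n3", "n4", "n5"])
print(p, r)  # 0.25 0.5
```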
@dosu How can we limit the number of `retrieved_id`s? If we have more retrieved_ids, will it affect the precision and recall scores?
@dosu if we have more retrieved_ids, will it affect the precision and recall scores?
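Under the usual set-based definitions (precision = hits / retrieved, recall = hits / expected), padding the retrieval with extra irrelevant ids dilutes precision but leaves recall unchanged. A quick check of that claim:

```python
# Shows the effect of retrieving more (irrelevant) ids: precision drops,
# recall stays the same. Standard set-based definitions.

def precision_recall(expected_ids, retrieved_ids):
    expected, retrieved = set(expected_ids), set(retrieved_ids)
    hits = len(expected & retrieved)
    return hits / len(retrieved), hits / len(expected)

expected = ["n1", "n2"]
print(precision_recall(expected, ["n1", "n2"]))              # (1.0, 1.0)
print(precision_recall(expected, ["n1", "n2", "x1", "x2"]))  # (0.5, 1.0)
# With 18 extra irrelevant ids (20 retrieved in total):
print(precision_recall(expected, ["n1", "n2"] + [f"x{i}" for i in range(18)]))  # (0.1, 1.0)
```

This is also why a fixed retrieval size of 20 would cap precision at `len(expected) / 20` per query; in llama-index the retrieval size is typically controlled by the retriever's `similarity_top_k` parameter, though where it is set depends on how the retriever was built.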
@dosu why is my `retrieved_ids` length always 20? I see 20 node_ids in every `retrieved_ids` list, yet I have not specified this number anywhere.
@dosu below is the code I use for evaluating `["hit_rate", "mrr", "precision", "recall", "ap", "ndcg"]`: ``` from llama_index.core.evaluation import ( generate_question_context_pairs, EmbeddingQAFinetuneDataset, ) qa_dataset = generate_question_context_pairs( nodes, llm=llm_model, num_questions_per_chunk=4 )...
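For reference, simplified single-query versions of the two rank-based metrics in that list, hit_rate and mrr. These follow the standard definitions (a hit if any expected id appears among the retrieved ids; MRR as the reciprocal rank of the first relevant id), which is what the llama-index metrics are commonly understood to compute, not a copy of the library's code:

```python
# hit_rate: 1.0 if at least one expected id was retrieved, else 0.0.
# mrr: 1 / rank of the first relevant retrieved id (1-based), 0.0 if none.

def hit_rate(expected_ids, retrieved_ids):
    return 1.0 if set(expected_ids) & set(retrieved_ids) else 0.0

def mrr(expected_ids, retrieved_ids):
    expected = set(expected_ids)
    for rank, rid in enumerate(retrieved_ids, start=1):
        if rid in expected:
            return 1.0 / rank
    return 0.0

# The only relevant id sits at rank 2:
print(hit_rate(["n2"], ["n1", "n2", "n3"]))  # 1.0
print(mrr(["n2"], ["n1", "n2", "n3"]))       # 0.5
```

Note that unlike precision, both metrics depend on rank order, so MRR is sensitive to where the first relevant id lands rather than to how many ids are retrieved.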