adityamity
@dosu : In the code below, I am passing my query_engine:

```
from llama_index.core.evaluation import RetrieverEvaluator

metrics = ["hit_rate", "mrr"]  # , "precision", "recall", "ap", "ndcg"
retriever_evaluator = RetrieverEvaluator.from_metric_names(
    metrics, retriever=query_engine
)
```
...
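For reference, a minimal sketch of how `RetrieverEvaluator` is usually constructed, assuming `index` is a `VectorStoreIndex` built over the same nodes; the `retriever` argument expects a retriever object (e.g. from `index.as_retriever()`) rather than a query engine:

```
from llama_index.core.evaluation import RetrieverEvaluator

# `index` is an assumed VectorStoreIndex built over your nodes.
retriever = index.as_retriever(similarity_top_k=2)

retriever_evaluator = RetrieverEvaluator.from_metric_names(
    ["hit_rate", "mrr"], retriever=retriever
)
```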
@dosu I am using the code below for evaluating `["hit_rate", "mrr", "precision", "recall", "ap", "ndcg"]`:

```
from llama_index.core.evaluation import (
    generate_question_context_pairs,
    EmbeddingQAFinetuneDataset,
)

qa_dataset = generate_question_context_pairs(
    nodes, llm=llm, num_questions_per_chunk=2
)
```
...
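The truncated flow above typically continues by running the evaluator over the generated dataset; a minimal sketch, assuming `retriever_evaluator` and `qa_dataset` from the snippets above:

```
import asyncio

# Run the evaluator over every query in the generated dataset.
async def run_eval():
    return await retriever_evaluator.aevaluate_dataset(qa_dataset)

eval_results = asyncio.run(run_eval())
```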
@dosu In the provided solution, context recall and precision are not being evaluated. I want to evaluate retrieval with precision and recall.
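A minimal sketch of requesting precision and recall alongside the other metrics, assuming a llama-index version whose metric registry includes these names (as the earlier comment suggests) and an existing `index`:

```
from llama_index.core.evaluation import RetrieverEvaluator

# Add precision/recall (plus ap/ndcg) to the metric list.
metrics = ["hit_rate", "mrr", "precision", "recall", "ap", "ndcg"]
retriever_evaluator = RetrieverEvaluator.from_metric_names(
    metrics, retriever=index.as_retriever(similarity_top_k=2)  # assumed index
)
```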
@dosu - In RAGChecker, how can we create `gt_answer` and `user_query` from the nodes? I want to run RAGChecker for a set of `user_query` and `gt_answer` pairs and get the...
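A rough sketch (not RAGChecker's own API) of one way to derive `user_query`/`gt_answer` pairs from the nodes with an LLM; the prompts and record format here are illustrative assumptions only:

```
# Illustrative only: generate a question and a ground-truth answer per node,
# to be mapped into whatever input format your RAGChecker integration expects.
eval_records = []
for node in nodes:
    text = node.get_content()
    question = llm.complete(
        "Write one question that can be answered only from this passage:\n" + text
    ).text.strip()
    answer = llm.complete(
        f"Passage:\n{text}\n\nQuestion: {question}\nAnswer concisely:"
    ).text.strip()
    eval_records.append({"user_query": question, "gt_answer": answer})
```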
@dosu - What does this `generate_question_context_pairs` output contain, and how does `RetrieverEvaluator` use it for evaluating precision and recall?
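For context, a small sketch of the dataset structure, assuming `qa_dataset` is the `EmbeddingQAFinetuneDataset` returned above; its `relevant_docs` mapping is what the evaluator treats as the expected ids for each query:

```
# queries:       {query_id: generated question}
# corpus:        {node_id: node text}
# relevant_docs: {query_id: [node_ids the question was generated from]}
sample_id, sample_query = next(iter(qa_dataset.queries.items()))
print(sample_query)
print(qa_dataset.relevant_docs[sample_id])
```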
@dosu : Having checked the Precision class, I want to understand where it takes `expected_ids` and `retrieved_ids` from. From where and how should I get these `expected_ids`...
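A minimal sketch of calling the metric directly with hand-built id lists, assuming this import path and the shared `compute()` interface of the retrieval metrics; the id values are hypothetical:

```
from llama_index.core.evaluation.retrieval.metrics import Precision, Recall

expected_ids = ["node_1", "node_7"]   # hypothetical ground-truth node ids
retrieved_ids = ["node_7", "node_3"]  # hypothetical retriever output
print(Precision().compute(expected_ids=expected_ids, retrieved_ids=retrieved_ids).score)
print(Recall().compute(expected_ids=expected_ids, retrieved_ids=retrieved_ids).score)
```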
@dosu How can I create a dataset with `expected_ids` (ground truth) and `retrieved_ids`? Do I need to use the nodes to create this? Can you provide me the code to...
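A rough sketch of building the ground truth by hand from the nodes, assuming `EmbeddingQAFinetuneDataset` is constructed from `queries`, `corpus`, and `relevant_docs` mappings; the sample query and mapping are hypothetical:

```
from llama_index.core.evaluation import EmbeddingQAFinetuneDataset

corpus = {node.node_id: node.get_content() for node in nodes}
queries = {"q1": "What does the policy say about onboarding?"}  # hypothetical query
relevant_docs = {"q1": [nodes[0].node_id]}                      # hypothetical ground truth

qa_dataset = EmbeddingQAFinetuneDataset(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs
)
```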
@dosu How would I know that the evaluation is working correctly on multi-context queries? Are there any checks present? Please help me with the step-by-step process for evaluating...
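One simple sanity check, sketched under the assumption that `qa_dataset.relevant_docs` holds the ground-truth node ids per query:

```
# Count queries whose ground truth spans more than one node.
multi_context = {
    qid: ids for qid, ids in qa_dataset.relevant_docs.items() if len(ids) > 1
}
print(f"{len(multi_context)} of {len(qa_dataset.relevant_docs)} queries are multi-context")
```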
@dosu, how can I save the eval_results as a DataFrame?
@dosu I want to convert eval_results to a DataFrame that contains columns like query, expected_ids, expected_texts, retrieved_ids, retrieved_texts, and metric_dict.
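A sketch of one way to build such a DataFrame, assuming `eval_results` is a list of `RetrievalEvalResult` objects exposing these attributes:

```
import pandas as pd

rows = []
for r in eval_results:  # assumed list of RetrievalEvalResult
    rows.append(
        {
            "query": r.query,
            "expected_ids": r.expected_ids,
            "expected_texts": r.expected_texts,
            "retrieved_ids": r.retrieved_ids,
            "retrieved_texts": r.retrieved_texts,
            # Flatten each metric result to its numeric score.
            "metric_dict": {name: m.score for name, m in r.metric_dict.items()},
        }
    )

df = pd.DataFrame(rows)
df.to_csv("eval_results.csv", index=False)
```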