Kandra Akash
I will try this, but I need to know why it is throwing that error even though I am not expecting any huge output.
I am passing a dataset that contains columns named question, ground truth, contexts, answer, and reference contexts. Each column's description is given below: question: user prompt; ground truth...
So, is the data sent to each metric row by row, with the LLM generating the complete reasoning and explanation for each row? So, it is taking...
Also, if it is row by row, each row will have a max limit of 16,384 tokens if we use gpt-4o, right?
So, what could be the reason that the max_tokens limit of 16,384 is being exceeded when the data is processed row by row?
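One thing worth checking before blaming max_tokens: that parameter only caps the model's *output* per request, while the question, answer, and contexts all consume the *input* side of the context window. A rough pre-flight estimate can flag oversized rows. The sketch below uses the common ~4 characters-per-token heuristic, which is an approximation (swap in a real tokenizer such as tiktoken for exact counts); the function names and the 100,000-token headroom threshold are illustrative, not part of any library.

```python
# Rough per-row token estimate before sending a row to the evaluator LLM.
# The ~4 chars/token ratio is a heuristic, not the real gpt-4o tokenizer.

MAX_OUTPUT_TOKENS = 16_384  # gpt-4o's maximum *output* tokens per request

def estimate_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

def check_row(question: str, contexts: list[str], answer: str) -> dict:
    """Estimate how many input tokens one evaluation row consumes."""
    total = sum(estimate_tokens(t) for t in [question, answer, *contexts])
    # 100k is an arbitrary safety margin under gpt-4o's 128k context window.
    return {"input_tokens_est": total, "large_context": total > 100_000}

row = check_row("What is RAG?",
                ["some long document " * 500],
                "Retrieval-augmented generation.")
print(row)
```

If `large_context` comes back True for many rows, the error is more likely driven by oversized inputs than by the reasoning the metric asks the LLM to write.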
from libraries import *

evaluator_llm = LangchainLLMWrapper(AzureChatOpenAI(
    openai_api_key = api_key,
    azure_endpoint = azure_config["base_url"],
    azure_deployment = azure_config["model_deployment"],
    model = azure_config["model_name"],
    openai_api_version = azure_config["api_version"],
    validate_base_url = False,
    n = 1,
    max_tokens = 16383
))
...
I am sending the whole document content in the context field. Is that why it is throwing the error? If that is the issue, that error should be with...
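If the whole document is going in as a single context string, splitting it into smaller chunks before evaluation is one way to keep each row's input manageable. Below is a minimal sketch of character-based chunking with overlap; the `chunk_document` helper and the 2000/200 sizes are illustrative choices, not Ragas defaults or part of its API.

```python
def chunk_document(text: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    """Split a long document into overlapping character chunks.

    chunk_size and overlap are illustrative values; tune them to your data.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, keeping some overlap
    return chunks

doc = "x" * 5000
contexts = chunk_document(doc)
print(len(contexts))  # → 3
```

Passing a list of such chunks in the contexts column, rather than one giant string, also tends to give context_precision/context_recall something meaningful to score per chunk.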
metrics.collections does not support ResponseRelevancy, SummarizationScore, answer_relevancy, faithfulness, context_recall, or context_precision. Is there another package that provides these metrics?
TypeError: All metrics must be initialised metric objects, e.g: metrics=[BleuScore(), AspectCritic()] — why am I getting this error when I initialized all the metrics as objects?
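That TypeError is typically raised when a metric *class* is passed instead of an *instance* (i.e. `Faithfulness` rather than `Faithfulness()`). The sketch below reproduces the check with stand-in classes; `Metric`, `Faithfulness`, and `validate_metrics` here are dummies for illustration, not the real ragas classes, but the class-vs-instance distinction they demonstrate is the same one the library's validation enforces.

```python
# Stand-in classes to illustrate the class-vs-instance mistake.
class Metric:
    pass

class Faithfulness(Metric):
    pass

def validate_metrics(metrics):
    """Mimics the library's check: each entry must be a Metric *instance*."""
    for m in metrics:
        if not isinstance(m, Metric):  # a bare class fails this check
            raise TypeError(
                "All metrics must be initialised metric objects, "
                "e.g: metrics=[BleuScore(), AspectCritic()]"
            )

validate_metrics([Faithfulness()])    # OK: an initialised instance

try:
    validate_metrics([Faithfulness])  # the class itself, not an instance
except TypeError as e:
    print("raised:", e)
```

So it is worth double-checking that every entry in the metrics list has parentheses, and that each name is imported from a module whose classes subclass the library's metric base class (mixing objects from two different packages can fail the same isinstance check).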
It is showing BleuScore and AspectCritic as examples; I have never used those. This is my updated code:

from libraries import *

openai_client = AzureChatOpenAI(
    openai_api_key = api_key,
    azure_endpoint = azure_config["base_url"],
...