Krits
> I am experiencing the same issue using a local Llama 3 model, even when the context does contain the answer. Any suggestions why faithfulness returns as NaN? Any solution?
This is my `complete` method:

```python
def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
    payload = {
        self.ollama.prompt_key: prompt,
        "model": self.ollama.model,
        "options": self.ollama._model_kwargs,
        "stream": False,
        **kwargs,
    }
```
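For reference, a common alternative to a hand-rolled wrapper is to pass a local Ollama model to Ragas through its LangChain wrappers. A minimal sketch, assuming Ragas 0.1.x and a local Ollama server on the default port; the model name and base_url are placeholders, not the poster's setup:

```python
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas.llms import LangchainLLMWrapper
from ragas.embeddings import LangchainEmbeddingsWrapper

# Placeholder model name and URL for a local Ollama server
ollama_llm = ChatOllama(model="llama3", base_url="http://localhost:11434")
ollama_emb = OllamaEmbeddings(model="llama3", base_url="http://localhost:11434")

# Ragas-compatible wrappers that can be passed to evaluate()
ragas_llm = LangchainLLMWrapper(ollama_llm)
ragas_embeddings = LangchainEmbeddingsWrapper(ollama_emb)
```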
LLMResult is from LangChain, and I am not using LangChain. With PromptValue I get: `NameError: name 'PromptValue' is not defined`.
Ok got it
Why is an empty DataFrame returned?

```python
llm, embed_model = init_llm_ollama(base_url, bearer_token)

def evaluate(
    dataset: List[LLMTestCase],
    metrics: List[Callable],
    llm: llm,
    embeddings: embed_model,
    raise_exceptions: bool = True,
) -> pd.DataFrame:
    results = ...
```
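For comparison, a sketch of how evaluation is usually driven in Ragas 0.1.x, where `evaluate()` returns a result object whose `to_pandas()` yields one row per sample. The sample data below is illustrative, and `ragas_llm` / `ragas_embeddings` refer to the wrapped Ollama models sketched above:

```python
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Illustrative sample; column names follow the Ragas 0.1.x schema.
data = {
    "question": ["What is the capital of France?"],
    "answer": ["Paris is the capital of France."],
    "contexts": [["Paris is the capital and largest city of France."]],
    "ground_truth": ["Paris"],
}
eval_dataset = Dataset.from_dict(data)

result = evaluate(
    eval_dataset,
    metrics=[faithfulness, answer_relevancy],
    llm=ragas_llm,
    embeddings=ragas_embeddings,
    raise_exceptions=True,  # surface failures instead of silently producing NaN rows
)
df = result.to_pandas()  # non-empty DataFrame with one row per sample
print(df)
```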
Does Ragas support the llama3.1:8b model? I'm getting NaN output.
Python version: 3.11, Ragas: 0.1.16
Can you share the correct evaluation metrics?
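A sketch of the metric imports typically used for RAG evaluation in Ragas 0.1.x; which ones are "correct" depends on which fields (contexts, ground_truth, ...) your dataset actually contains:

```python
from ragas.metrics import (
    answer_relevancy,   # needs question + answer (and embeddings)
    context_precision,  # needs question + contexts + ground_truth
    context_recall,     # needs question + contexts + ground_truth
    faithfulness,       # needs question + answer + contexts
)

metrics = [faithfulness, answer_relevancy, context_precision, context_recall]
```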
Error while evaluating: verify that the data types in your code align with the expected PyArrow data types.
`'dict' object has no attribute 'rename_columns'`
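That attribute error usually appears when a plain Python dict is passed where `evaluate()` expects a `datasets.Dataset` (which does have `rename_columns`). A minimal sketch of the conversion, with illustrative column names:

```python
from datasets import Dataset

samples = {
    "question": ["..."],
    "answer": ["..."],
    "contexts": [["..."]],   # one list of strings per row, matching PyArrow's expected types
    "ground_truth": ["..."],
}

eval_dataset = Dataset.from_dict(samples)  # has .rename_columns, unlike a plain dict
```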