Invalid n value (currently only n = 1 is supported)
```
File "/usr/local/lib/python3.10/dist-packages/ragas/langchain/evalchain.py", line 166, in evaluate
    dataset_with_scores = self.metric.score(dataset, callbacks=callbacks)
  File "/usr/local/lib/python3.10/dist-packages/ragas/metrics/base.py", line 76, in score
    score = self._score_batch(dataset.select(batch), callbacks=group)
  File "/usr/local/lib/python3.10/dist-packages/ragas/metrics/_answer_relevance.py", line 123, in _score_batch
    results = self.llm.generate(
  File "/usr/local/lib/python3.10/dist-packages/ragas/llms/langchain.py", line 209, in generate
    return self._generate_multiple_completions(prompts, n, callbacks)
  File "/usr/local/lib/python3.10/dist-packages/ragas/llms/langchain.py", line 120, in _generate_multiple_completions
    result = self.llm.generate(ps, callbacks=callbacks)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 534, in generate
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 524, in generate
    self._generate_with_cache(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 749, in _generate_with_cache
    result = self._generate(
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 442, in _generate
    response = self.completion_with_retry(
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 357, in completion_with_retry
    return self.client.create(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/openai/resources/chat/completions.py", line 643, in create
    return self._post(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1250, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 931, in request
    return self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1030, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'detail': 'Invalid n value (currently only n = 1 is supported)'}
```
ragas==0.0.22
Hey there, @ARES3366! I'm here to assist you with any bugs, questions, or contributions you have. Let's tackle this together!
To resolve the "Invalid n value (currently only n = 1 is supported)" error when using the ragas library version 0.0.22, ensure that the dataset you are passing to the `evaluate` function has the correct format and values. Specifically, the dataset should have the following columns: `question`, `contexts`, `answer`, and `ground_truth`.
Here is an example of how to structure your dataset and call the evaluate function:
```python
from ragas import evaluate
from datasets import Dataset

# Example dataset
data = {
    'question': ['What is the capital of France?'],
    'contexts': [['Paris is the capital of France.']],
    'answer': ['Paris'],
    'ground_truth': [['Paris']]
}

# Create a Dataset object
dataset = Dataset.from_dict(data)

# Call the evaluate function
result = evaluate(dataset)
print(result)
```
Ensure that the `contexts` and `ground_truth` columns are lists of strings, even if they contain only one string. This should resolve the error related to the "Invalid n value".
Additionally, the ragas library currently supports only n = 1 for text generation. This is enforced in the library's code, where multiple completions are only supported for specific language models. If you are using a language model that does not support multiple completions, you will need to ensure that n is set to 1 [1][2].
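If your provider only ever accepts single completions, the constraint above can be enforced at the call site. The following is a hypothetical sketch (the names `clamped_generate` and `fake_create` are illustrative, not ragas or LangChain APIs) of pinning `n` to 1 before the request reaches the backend; note that clamping silently discards any extra completions the caller asked for:

```python
# Hypothetical sketch: pin n=1 for backends that reject multiple completions.
# Names here are illustrative, not ragas or LangChain APIs.

def clamped_generate(client_create, prompt: str, n: int = 1) -> list[str]:
    """Always call the backend with n=1, regardless of what was requested.

    Caveat: if the caller asked for n > 1, the extra completions are
    simply not produced.
    """
    return client_create(prompt=prompt, n=1)

def fake_create(prompt: str, n: int = 1) -> list[str]:
    # Stand-in for a chat-completions endpoint that rejects n > 1,
    # like the one in the traceback above.
    if n != 1:
        raise ValueError("Invalid n value (currently only n = 1 is supported)")
    return [f"answer to: {prompt}"]

# A direct call with n=5 would raise; the clamped path succeeds.
print(clamped_generate(fake_create, "What is the capital of France?", n=5))
```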
To continue talking to Dosu, mention @dosu.
@dosu I am using ragas 0.1.20. My LLM is Deepseek v2.0. I also encountered this problem; the error message is "Invalid n value (currently only n = 1 is supported)". Also, `'ground_truth': [['Paris']]` cannot be set like that in ragas 0.1.20 — it causes a different error. My input:

```python
data = {
    'question': ['When was the first super bowl?'],
    'answer': ['The first superbowl was held on Jan 15, 1967'],
    'contexts': [['The First AFL–NFL World Championship Game was an American football game played on January 15, 1967, at the Los Angeles Memorial Coliseum in Los Angeles,']],
    'ground_truth': ['The first superbowl was held on January 15, 1967']
}
```

My code looks like this; I am using a custom LLM and Embedding:

```python
dataset = Dataset.from_dict(data)
score = evaluate(
    dataset,
    metrics=[faithfulness, context_recall, context_precision,
             answer_correctness, answer_similarity, answer_relevancy],
    llm=deepseek_llm,
    embeddings=embeddings,
)
```
Have you found a way to mitigate these issues @ARES3366 @francescofan?
I think this is triggered by this line https://github.com/explodinggradients/ragas/blob/main/src/ragas/metrics/_answer_relevance.py#L143, which calls https://github.com/explodinggradients/ragas/blob/main/src/ragas/prompt/pydantic_prompt.py#L222 with n set to the metric's strictness. The Deepseek API only supports n = 1, so whenever samples are generated with strictness > 1, the request goes out with n > 1 and this error is triggered.
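The call chain described above can be illustrated with a minimal mock (this is the shape of the problem, not the actual ragas code; `DeepseekLikeAPI` and `score_answer_relevance` are made-up names): the metric's strictness is passed straight through as the API's `n`, so strictness > 1 trips a provider that only accepts n = 1, while strictness = 1 goes through:

```python
# Hypothetical mock of the chain: metric.strictness -> generate(n=...) -> provider.
# Not the actual ragas internals, just a demonstration of the failure mode.

class DeepseekLikeAPI:
    """Stand-in for a provider that rejects n > 1."""

    def create(self, prompt: str, n: int) -> list[str]:
        if n != 1:
            raise ValueError(
                "Error code: 400 - Invalid n value (currently only n = 1 is supported)"
            )
        return ["generated question"]

def score_answer_relevance(api: DeepseekLikeAPI, strictness: int) -> list[str]:
    # Mirrors the pattern in _answer_relevance.py: n is set to strictness.
    return api.create("Generate a question for the given answer.", n=strictness)

api = DeepseekLikeAPI()
print(score_answer_relevance(api, strictness=1))  # succeeds with n=1
try:
    score_answer_relevance(api, strictness=3)     # n=3 -> rejected by the provider
except ValueError as e:
    print(e)
```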
Can we override the n value for a specific LLM provider, @jjmachan?
Check out `bypass_n` in the latest version. It should resolve this.
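For intuition, the idea behind bypassing `n` can be sketched in plain Python (this is my own illustrative sketch, not the ragas `bypass_n` implementation; `SingleCompletionBackend` and `NBypassWrapper` are made-up names): instead of one request with n > 1, issue n sequential requests with n = 1 and collect the results:

```python
# Hypothetical sketch of an n-bypass: emulate n completions against a
# backend that only accepts n=1. Names are illustrative, not ragas APIs.

class SingleCompletionBackend:
    """Stand-in for an API (like Deepseek's) that rejects n > 1."""

    def create(self, prompt: str, n: int = 1) -> list[str]:
        if n != 1:
            raise ValueError("Invalid n value (currently only n = 1 is supported)")
        return [f"completion of: {prompt}"]

class NBypassWrapper:
    """Produces n completions by making n sequential n=1 calls."""

    def __init__(self, backend: SingleCompletionBackend):
        self.backend = backend

    def generate(self, prompt: str, n: int = 1) -> list[str]:
        results: list[str] = []
        for _ in range(n):  # one n=1 request per requested completion
            results.extend(self.backend.create(prompt, n=1))
        return results

wrapper = NBypassWrapper(SingleCompletionBackend())
print(len(wrapper.generate("What is the capital of France?", n=3)))  # 3
```

The trade-off is n round trips instead of one, but it keeps strictness > 1 usable against single-completion providers.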
It seems the issue was answered, closing this now.