`OpenAIChat` returns only one result
`OpenAIChat` currently returns only one result even if `n > 1`:

```python
full_response = completion_with_retry(self, messages=messages, **params)
return LLMResult(
    generations=[
        [Generation(text=full_response["choices"][0]["message"]["content"])]
    ],
    llm_output={"token_usage": full_response["usage"]},
)
```
All of the choices in `full_response["choices"]` should be used to create multiple `Generation` objects, not just the first one.
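To illustrate the intended fix, here is a minimal sketch that iterates over every entry in `full_response["choices"]` instead of indexing only `choices[0]`. The mock payload below stands in for the OpenAI API response; an actual patch would wrap each extracted string in a langchain `Generation` rather than keeping plain strings.

```python
# Mock of an OpenAI chat completion payload returned with n > 1
# (hypothetical data, standing in for the real API response).
full_response = {
    "choices": [
        {"message": {"content": "First completion"}},
        {"message": {"content": "Second completion"}},
    ],
    "usage": {"total_tokens": 42},
}

# Current (buggy) behavior: only the first choice is kept.
single = [full_response["choices"][0]["message"]["content"]]

# Proposed behavior: build one generation per choice.
generations = [
    choice["message"]["content"] for choice in full_response["choices"]
]

print(generations)  # ['First completion', 'Second completion']
```

With `n=4` the real response would contain four entries in `choices`, and this loop would surface all four instead of silently discarding three.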
I also encountered this problem.
I'm replying here for anyone who has the same issue: you can generate multiple results with something like

```python
model = ChatOpenAI(model="gpt-4", n=4)
message = model.generate(
    messages=[messages],
)
```

Note: `generate` expects `messages` of type `List[List[BaseMessage]]`, not the `List[BaseMessage]` we usually pass to `predict_messages`.
Hi, @smileehn
I'm helping the LangChain team manage their backlog and am marking this issue as stale. The issue is that `OpenAIChat` currently returns only one result even when multiple results are requested via `n > 1`. Other users have confirmed the problem, and one provided a workaround: calling `generate` with messages of type `List[List[BaseMessage]]` to obtain multiple results.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself or the issue will be automatically closed in 7 days.
Thank you for your understanding and cooperation.