
`OpenAIChat` returns only one result

Open smileehn opened this issue 2 years ago • 1 comment

OpenAIChat currently returns only one result even if n > 1:

full_response = completion_with_retry(self, messages=messages, **params)
return LLMResult(
    generations=[
        [Generation(text=full_response["choices"][0]["message"]["content"])]
    ],
    llm_output={"token_usage": full_response["usage"]},
)

Multiple choices in full_response["choices"] should be used to create multiple Generations.
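
A minimal sketch of what such a fix could look like, assuming the same full_response structure returned by completion_with_retry above (this is not the actual patch, just an illustration of building one Generation per choice):

full_response = completion_with_retry(self, messages=messages, **params)
return LLMResult(
    generations=[
        [
            # one Generation per returned choice, instead of only choices[0]
            Generation(text=choice["message"]["content"])
            for choice in full_response["choices"]
        ]
    ],
    llm_output={"token_usage": full_response["usage"]},
)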

smileehn avatar Mar 03 '23 21:03 smileehn

I also encountered this problem.

xgdyp avatar Mar 04 '23 06:03 xgdyp

Replying here for anyone who runs into the same issue.

You can generate multiple results with something like:

from langchain.chat_models import ChatOpenAI

model = ChatOpenAI(model="gpt-4", n=4)

# generate() takes a batch of prompts, so wrap your message list in another list
result = model.generate(messages=[messages])

Note: the type of messages here is List[List[BaseMessage]], not the List[BaseMessage] we usually pass to predict_messages.
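
Once generate() returns, the n candidates for a single prompt sit in the first entry of generations on the returned LLMResult. A minimal sketch of reading them back, assuming LangChain's LLMResult layout (variable names are illustrative):

# result.generations holds one list per input prompt;
# result.generations[0] contains the n candidate generations for our single prompt
for generation in result.generations[0]:
    print(generation.text)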

Yongtae723 avatar Sep 09 '23 03:09 Yongtae723

Hi, @smileehn

I'm helping the LangChain team manage their backlog and am marking this issue as stale. The OpenAIChat class currently returns only one result even when multiple results are requested (n > 1). The issue has been acknowledged by other users, and one provided a workaround: set n on the model and call generate() with a List[List[BaseMessage]] rather than the usual List[BaseMessage].

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself or the issue will be automatically closed in 7 days.

Thank you for your understanding and cooperation.

dosubot[bot] avatar Dec 09 '23 16:12 dosubot[bot]