
Bug in OpenAI client createChat function

Open Pramodh-G opened this issue 1 year ago • 3 comments

In the OpenAI client, the model parameter in the outgoing HTTP request is set to c.Model, where c is the openaiClient. Please refer here

It should be payload.Model instead, because the wrapper function CreateChat at here sets the model name on the payload correctly.

The effect of this bug is that users must initialise the client with the model they want to use; otherwise, the Call method on a chat/completion object will not work. It also prevents the user from changing models while generating responses from the LLM.
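The reported behaviour can be illustrated with a minimal sketch (the types below are hypothetical stand-ins, not langchaingo's actual structs): the buggy path always sends the client's default model, while the fix prefers the per-request model and falls back to the client default only when none is set.

```go
package main

import "fmt"

// openaiClient stands in for the internal client; Model is the
// default chosen at construction time.
type openaiClient struct {
	Model string
}

// ChatRequest stands in for the request payload; Model is the
// per-request model chosen by the caller (via CreateChat).
type ChatRequest struct {
	Model string
}

// buggyModel mirrors the reported bug: the HTTP payload always uses
// the client's default model, ignoring the request's Model field.
func buggyModel(c *openaiClient, payload *ChatRequest) string {
	return c.Model
}

// fixedModel uses the per-request model, falling back to the client
// default only when the request leaves it empty.
func fixedModel(c *openaiClient, payload *ChatRequest) string {
	if payload.Model != "" {
		return payload.Model
	}
	return c.Model
}

func main() {
	c := &openaiClient{Model: "gpt-3.5-turbo"}
	req := &ChatRequest{Model: "gpt-4"}
	fmt.Println(buggyModel(c, req)) // gpt-3.5-turbo: caller's choice is ignored
	fmt.Println(fixedModel(c, req)) // gpt-4: caller's choice is honored
}
```

With the fix, the client-level model becomes a default rather than a hard requirement.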

Pramodh-G avatar Nov 28 '23 08:11 Pramodh-G

I can send in a fix for this if you are interested @tmc

Pramodh-G avatar Nov 28 '23 08:11 Pramodh-G

> I can send in a fix for this if you are interested @tmc

Yes please submit a fix in a PR.

tmc avatar Dec 20 '23 21:12 tmc

Hi @tmc, I'm sending in a bugfix for this. Meanwhile, I noticed several inconsistencies in the repo. Please feel free to correct me if I am wrong.

  • In llms/ernie/internal/ernieclient/ernieclient.go, there is a CreateChat function that is never used anywhere.
  • Here, the modelPath should be passed as another parameter in ernieClient.ChatCompletionRequest, configurable through options. I noticed that this is how OpenAI's internal client handles it.

The code around setting the modelName/Path felt very convoluted, as there were many functions that kept overwriting the default model for ernie clients.

  • For the palmclient here, I see that model names have been hardcoded as constants. Do we want to make these configurable?
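One way to make the hardcoded names configurable (names below are illustrative, not palmclient's real API) is to keep the current constant as a default and let callers override it at client construction:

```go
package main

import "fmt"

// defaultTextModelName plays the role of the existing hardcoded
// constant; the actual value here is only an example.
const defaultTextModelName = "models/text-bison-001"

// PaLMClient stands in for the palm client type.
type PaLMClient struct {
	textModel string
}

// Option configures a PaLMClient at construction time.
type Option func(*PaLMClient)

// WithTextModel overrides the default text model name.
func WithTextModel(name string) Option {
	return func(c *PaLMClient) { c.textModel = name }
}

// New builds a client with the default model, then applies overrides.
func New(opts ...Option) *PaLMClient {
	c := &PaLMClient{textModel: defaultTextModelName}
	for _, opt := range opts {
		opt(c)
	}
	return c
}

func main() {
	fmt.Println(New().textModel)                               // the default constant
	fmt.Println(New(WithTextModel("models/custom")).textModel) // caller override
}
```

Existing callers keep working unchanged, since omitting the option preserves today's hardcoded behaviour.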

Tagging contributors to palm and ernie for more visibility: @sxk10812139 @FluffyKebab

Thank you so much for your contributions; I would love to be of more help here!

Pramodh-G avatar Apr 06 '24 08:04 Pramodh-G