Added support for Chat Completion Model

Open onlylonly opened this issue 1 year ago • 2 comments

I've added support for the 'chat' model type, along with the ability to switch between chat and completion models.

Added examples in config.ts.

config.ts:

  • Added SYSTEM_PROMPT, MODEL_TYPE, and MODEL constants, plus three configuration examples.
  • Added a comment specifying the model name used by the API and updated the MODEL constant.
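The thread does not show the actual file, but based on the constants named above, the config.ts additions might look roughly like this (the model names and prompt text are placeholders, not from the PR):

```typescript
// Hypothetical sketch of the described config.ts additions.
// MODEL must match the model name expected by the backend API.
export type ModelType = "chat" | "completion";

// Example configuration: a local text completion endpoint.
export const MODEL_TYPE: ModelType = "completion";
export const MODEL = "llama-3-8b-instruct";
export const SYSTEM_PROMPT = "You are a helpful writing assistant.";

// Alternative configuration (commented out): an OpenAI chat model.
// export const MODEL_TYPE: ModelType = "chat";
// export const MODEL = "gpt-4";
```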

main.ts:

  • Updated the import statement to include MODEL, MODEL_TYPE, and SYSTEM_PROMPT from config.ts.
  • Changed the model parameter in the streamText function to use the MODEL constant from config.ts.
  • Added support for chat-type models and the ability to switch between 'chat' and 'completion' models.
  • Commented out n_predict: MAX_TOKENS and cache_prompt: true on line 63.
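The switching logic itself is not shown in the thread, but the core difference is the request payload shape: chat endpoints take a messages array, completion endpoints take a raw prompt. A minimal sketch (buildRequestBody is a hypothetical helper, not a function from the PR):

```typescript
type ModelType = "chat" | "completion";

// Hypothetical sketch of the chat/completion switch described above:
// a chat model receives a messages array with the system prompt as the
// first message; a completion model receives a plain prompt string.
function buildRequestBody(
  modelType: ModelType,
  model: string,
  systemPrompt: string,
  prompt: string
): object {
  if (modelType === "chat") {
    return {
      model,
      messages: [
        { role: "system", content: systemPrompt },
        { role: "user", content: prompt },
      ],
      stream: true,
      // n_predict: MAX_TOKENS, cache_prompt: true  (commented out, as in the PR)
    };
  }
  // Completion-type model: send the prompt directly.
  return { model, prompt, stream: true };
}
```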

onlylonly avatar Jun 24 '24 01:06 onlylonly

Currently, PARAMS from config.ts seems to be unused. Should we remove it?

onlylonly avatar Jun 24 '24 02:06 onlylonly

Thanks for the effort, but this is not the right way to broaden loader support. The right way is to add support for the text completion endpoint to those loaders (which I believe is currently happening in Ollama). Chat completion is a semantic mismatch for text completion, and using it to do the latter is a hack that I don't want in the code.

The fact that OpenAI restricts GPT-4 to the chat completion endpoint is unfortunate (and clearly intended to further limit what users can do with their models), but not a sufficient reason for doing things the wrong way.

As for local models, they all support text completion ("chat completion" is just text completion with a specific template), so no changes are required to use e.g. Llama 3 Instruct. The only problem is that some loaders, notably Ollama and Kobold, don't expose that endpoint, but that is their bug to fix.
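To illustrate the point that "chat completion" is just text completion with a model-specific template: applying the Llama 3 Instruct template by hand yields a prompt usable with a plain text completion endpoint. This is a sketch, not code from the project:

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Render a chat conversation into a raw prompt using the Llama 3
// Instruct template, suitable for a text completion endpoint.
function applyLlama3Template(messages: Message[]): string {
  let prompt = "<|begin_of_text|>";
  for (const m of messages) {
    prompt += `<|start_header_id|>${m.role}<|end_header_id|>\n\n${m.content}<|eot_id|>`;
  }
  // Leave the assistant header open so the model continues from here.
  prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n";
  return prompt;
}
```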

> currently PARAMS from config.ts seems to be unused. Should we remove it?

It's not unused; it's included in the params variable, though somehow you seem to have removed it in this PR.

p-e-w avatar Jun 24 '24 10:06 p-e-w