
[Feature] Plans to add model provider support

Open fred-bf opened this issue 1 year ago • 16 comments

There have been many discussions in the community regarding support for multiple models.

  • ChatGPTNextWeb#3484
  • ChatGPTNextWeb#3923
  • ChatGPTNextWeb#960
  • ChatGPTNextWeb#3431
  • ChatGPTNextWeb#3125

Here, we will gather NextChat's current support plans for different models and provide dynamic updates on the overall progress.

First, we plan to separate the model-related logic from the frontend, possibly as a standalone JavaScript component (which could be managed as an independent package). We will then develop adapters for each model on top of this component/package. We expect each adapter to provide at least the following basic capabilities: multimodality (text, images), token billing, and customizable model parameters (temperature, max_tokens, etc.).
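To make the adapter idea concrete, here is a minimal sketch of what such a provider interface could look like. All names (`ModelProviderAdapter`, `ChatMessage`, etc.) are illustrative assumptions, not NextChat's actual API:

```typescript
// Hypothetical sketch of the adapter contract described above.
// Names and shapes are assumptions for illustration only.

interface ModelParams {
  temperature?: number;
  maxTokens?: number;
}

interface ChatPart {
  type: "text" | "image";
  data: string; // text content, or an image URL/base64 payload
}

interface ChatMessage {
  role: "system" | "user" | "assistant";
  parts: ChatPart[]; // multimodal: text and images
}

interface Usage {
  promptTokens: number;
  completionTokens: number;
}

// Each provider (OpenAI, Ollama, Claude, ...) would implement this.
interface ModelProviderAdapter {
  name: string;
  chat(
    messages: ChatMessage[],
    params: ModelParams
  ): Promise<{ reply: ChatMessage; usage: Usage }>; // usage = token billing
}

// A trivial echo adapter demonstrating the contract.
const echoAdapter: ModelProviderAdapter = {
  name: "echo",
  async chat(messages, _params) {
    const last = messages[messages.length - 1];
    return {
      reply: { role: "assistant", parts: last.parts },
      usage: { promptTokens: 0, completionTokens: 0 },
    };
  },
};
```

A real adapter would replace the echo body with a call to the provider's HTTP API while keeping the same shape, which is what lets the UI stay provider-agnostic.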

We have roughly divided the work into the following parts:

NextChat UI Separation

  • [ ] Separation of UI components
  • [ ] Allow registering model providers on the Settings page
  • [ ] Standardization of configuration, statistics, sharing, and other functionalities
  • [ ] Standardization of model features, including function calling, agent loaders, etc.

Implementation of Multi-Model Providers

  • [ ] Basic multi-model package (using OpenAI GPT as an example)
  • [ ] Ollama
    • [x] Support localhost Ollama deployment (https://docs.nextchat.dev/models/ollama)
    • [ ] Support managing local models
  • [ ] Derivatives of GPT
    • [x] Azure
    • [ ] OneAPI (to be determined)
  • [ ] Open-source models (considering support for a local App Model Manager)
    • [ ] Llama
    • [ ] Mistral
  • [ ] Closed-source models
    • [x] Claude
    • [ ] AWS Lex
    • [ ] Google Gemini
  • [ ] Other models
    • [ ] Wenxinyiyan
    • [ ] Zhipu
  • [ ] Hosting Platforms
    • [ ] Poe.com
    • [ ] together.ai
    • [ ] Cloudflare AI

Local Model Manager

  • [ ] Support for local model downloading and running

Server-Side Multi-Model Service

  • [ ] Support for independently deploying a multi-model service as NextChat's API proxy
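One plausible piece of such a proxy is routing each request to a backend by model-name prefix. The prefixes and endpoints below are illustrative assumptions, not a committed design:

```typescript
// Hypothetical sketch: route a requested model to a provider base URL
// by name prefix. Prefix table and fallback are assumptions.

const PROVIDER_BASE_URLS: Record<string, string> = {
  gpt: "https://api.openai.com/v1",
  claude: "https://api.anthropic.com/v1",
  ollama: "http://localhost:11434/v1",
};

// Resolve the backend for a model name, falling back to OpenAI.
function resolveBaseUrl(model: string): string {
  for (const [prefix, url] of Object.entries(PROVIDER_BASE_URLS)) {
    if (model.startsWith(prefix)) return url;
  }
  return PROVIDER_BASE_URLS["gpt"];
}
```

With a table like this, the proxy only needs to forward the request body to `resolveBaseUrl(model)` and attach the matching credentials, so NextChat itself can speak one API.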

Current implementation:

  • ChatGPTNextWeb#2603

fred-bf avatar Feb 10 '24 13:02 fred-bf

Are there any plans to support mistral.ai?

WBinBin001 avatar Feb 27 '24 15:02 WBinBin001


@WBinBin001 you can use mistral through Ollama https://docs.nextchat.dev/models/ollama

fred-bf avatar Feb 28 '24 03:02 fred-bf

@WBinBin001 you can use mistral through Ollama https://docs.nextchat.dev/models/ollama

Will the mistral.ai platform's own API and key be supported?

WBinBin001 avatar Feb 28 '24 13:02 WBinBin001


The Mistral AI API offers Mistral-small-latest, Mistral-medium-latest, and Mistral-large-latest. On MMLU, Mistral Large scored 81.2%, ranked second only behind GPT-4 (86.4%) and ahead of GPT-4 Turbo (80.48%). That makes the model particularly interesting, and I support its inclusion in popular cross-platform chatbots like ChatGPTNextWeb.

PPoooMM avatar Feb 29 '24 15:02 PPoooMM

Looking forward to supporting Claude 3

EarlyBedEarlyUp avatar Mar 05 '24 10:03 EarlyBedEarlyUp

Vote for moonshot

snowords avatar Mar 15 '24 07:03 snowords

I think some of these functions should not be implemented in this repo. Different LLM backends can be standardized behind a single API through xusenlinzy/api-for-open-llm or BerriAI/litellm; ChatGPTNextWeb then only needs to focus on letting users set the URL and model per conversation.

GrayXu avatar Mar 19 '24 15:03 GrayXu

Kimi is awesome. Support it!

Genuifx avatar Mar 23 '24 16:03 Genuifx

Looking forward to supporting AWS Bedrock

0x5c0f avatar Mar 30 '24 12:03 0x5c0f

Claude is supported in PR: https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/pull/4457

fred-bf avatar Apr 09 '24 08:04 fred-bf

I'd really like to see Gemini Pro 1.5 added. The Gemini Pro 1.0 model performs surprisingly well, and the video-processing capabilities of Gemini Pro 1.5 are impressive. Thank you.

qiqitom avatar Apr 19 '24 08:04 qiqitom

Are you considering connecting to Tencent Cloud's Hunyuan large model?

lm379 avatar May 16 '24 12:05 lm379


Will image/file upload support be included in v3?

younes-io avatar May 18 '24 09:05 younes-io

#5001

lloydzhou avatar Jul 14 '24 05:07 lloydzhou
