[Feature Request] Support wildcards or regex in the - (minus) rules of CUSTOM_MODELS, and allow removing all models of a provider at once
🥰 Description of requirements
The model list keeps growing, and models that go unused (or for which no API token is available) need to be hidden. It would be useful if the - syntax of CUSTOM_MODELS supported wildcard or regex matching, e.g. -ernie-*. Likewise, if Baidu is not used at all, something like -@baidu could hide every Baidu model at once.
🧐 Solution
Add wildcard or regex matching to the - syntax of CUSTOM_MODELS, e.g. -ernie-*, and support a provider-level form such as -@baidu that hides all models of that provider.
📝 Supplementary information
No response
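To make the proposal concrete, here is a minimal sketch of how such minus rules might be evaluated, written in TypeScript since NextChat is a TypeScript project. The names (ModelEntry, matchesRule, applyMinusRules) and the rule grammar are hypothetical illustrations of the request, not the project's actual API:

```typescript
// Sketch only: one possible way to extend CUSTOM_MODELS "-" rules with
// glob wildcards and an "@provider" form. All names here are hypothetical.

interface ModelEntry {
  name: string;     // e.g. "ernie-bot-4"
  provider: string; // e.g. "baidu"
}

// Turn a glob like "ernie-*" into an anchored RegExp:
// escape regex metacharacters, then map each "*" to ".*".
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .split("*")
    .map((part) => part.replace(/[.+?^${}()|[\]\\]/g, "\\$&"))
    .join(".*");
  return new RegExp(`^${escaped}$`);
}

// A single rule body (the part after "-"): "@provider" hides a whole
// provider; anything else is treated as a glob on the model name.
function matchesRule(rule: string, model: ModelEntry): boolean {
  if (rule.startsWith("@")) {
    return model.provider.toLowerCase() === rule.slice(1).toLowerCase();
  }
  return globToRegExp(rule).test(model.name);
}

// Apply only the "-" entries of a CUSTOM_MODELS string to a model list.
function applyMinusRules(customModels: string, models: ModelEntry[]): ModelEntry[] {
  const minusRules = customModels
    .split(",")
    .map((s) => s.trim())
    .filter((s) => s.startsWith("-"))
    .map((s) => s.slice(1));
  return models.filter((m) => !minusRules.some((r) => matchesRule(r, m)));
}
```

With this sketch, -ernie-* would drop every model whose name starts with "ernie-", and -@baidu would drop every model whose provider is "baidu", regardless of name.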
Great suggestion, I support this.
Later we may consider showing models based on whether the corresponding API key is configured in your environment variables. For example, if you have not configured anything Baidu-related at all, the model list will not show any Baidu models.
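The idea above could be sketched as a mapping from provider to the env var that must be set for that provider to appear. The variable names below follow common NextChat conventions but are assumptions, not a confirmed mapping:

```typescript
// Sketch only: hide providers whose API key env var is not set.
// The env var names are assumed, not taken from the project's code.
const PROVIDER_KEY_ENV: Record<string, string> = {
  openai: "OPENAI_API_KEY",
  baidu: "BAIDU_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
};

// Return the providers whose key is present (non-empty) in the env.
function visibleProviders(env: Record<string, string | undefined>): string[] {
  return Object.entries(PROVIDER_KEY_ENV)
    .filter(([, envVar]) => Boolean(env[envVar]))
    .map(([provider]) => provider);
}
```

The model list would then be filtered to providers returned by visibleProviders(process.env).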
This is an excellent suggestion, looking forward to it.
If OPENAI_API_KEY is set, would it be possible to add a configuration item (e.g. GET_MODEL_LIST) that requests GET /v1/models at startup to fetch the current model list?
If OPENAI_API_KEY is set, would it be possible to add a configuration item (e.g. GET_MODEL_LIST) that requests GET /v1/models at startup to fetch the current model list?
There was an old PR that supported manually fetching the list of available models on the frontend: https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/pull/4815
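For reference, a startup fetch like the one suggested could look roughly like this. GET /v1/models is the standard OpenAI endpoint and returns a JSON body of the shape { data: [{ id: "..." }, ...] }; the function name and the GET_MODEL_LIST gating are hypothetical:

```typescript
// Sketch only: fetch the available model IDs from an OpenAI-compatible
// endpoint at startup (e.g. when a hypothetical GET_MODEL_LIST is enabled).
async function fetchModelIds(baseUrl: string, apiKey: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/v1/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`model list request failed: ${res.status}`);
  }
  // The OpenAI "list models" response wraps models in a "data" array.
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((m) => m.id);
}
```

The returned IDs could then seed the model picker instead of a hand-maintained CUSTOM_MODELS string.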
Looking forward to this feature. There are too many models now, and it takes ages to scroll through the list every time I pick one.
Just to follow up, any updates on this enhancement? There are more and more new models nowadays, and it's really annoying to modify the CUSTOM_MODELS env variable in Vercel each time new models are added to the deployment.
There is a PR (https://github.com/ChatGPTNextWeb/NextChat/pull/5992) that hasn't been processed yet, and it's uncertain whether it is still compatible with the current project. Sad.
Can't agree more. Looking forward to an automated solution for managing the CUSTOM_MODELS env variable. It would definitely streamline our deployment process and save us a lot of time!