NextChat
[Bug] The CUSTOM_MODELS environment variable has no effect under Docker, while DISABLE_GPT4 works (v2.9.12)
-e CUSTOM_MODELS=-all,+gpt-3.5-turbo \
Is that right?
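For context on what that value is supposed to mean: the documented CUSTOM_MODELS syntax removes models with a - prefix (or -all for everything) and adds them back with a + prefix. Below is a minimal, self-contained sketch of that interpretation; applyCustomModels is a hypothetical helper written for illustration, not code from this repository:

```ts
// Illustrative only: how a "-all,+gpt-3.5-turbo" style value is meant to be read.
function applyCustomModels(
  defaultModels: string[],
  customModels: string,
): Map<string, boolean> {
  const available = new Map(
    defaultModels.map((name): [string, boolean] => [name, true]),
  );
  for (const raw of customModels.split(",")) {
    const entry = raw.trim();
    if (!entry) continue;
    if (entry === "-all") {
      for (const name of available.keys()) available.set(name, false);
    } else if (entry === "+all") {
      for (const name of available.keys()) available.set(name, true);
    } else if (entry.startsWith("-")) {
      available.set(entry.slice(1), false);
    } else {
      available.set(entry.replace(/^\+/, ""), true);
    }
  }
  return available;
}

// With the value from this issue, only gpt-3.5-turbo should remain selectable:
console.log(applyCustomModels(["gpt-3.5-turbo", "gpt-4"], "-all,+gpt-3.5-turbo"));
// Map(2) { "gpt-3.5-turbo" => true, "gpt-4" => false }
```

So the value itself has the right shape; the question in this thread is why the Docker image ignores it.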
I'm having the same problem with the same configuration code.
Try using CUSTOM_MODELS="-all,+gpt-3.5-turbo".
I have tried following the configuration above, but the problem still exists.
Tried CUSTOM_MODELS="-all,+gpt-3.5-turbo" as well; it still doesn't work.
Screenshot 1 shows the CUSTOM_MODELS environment variable inside the Docker container; screenshot 2 shows the browser console.
How did you solve it? I set -all,+gpt-3.5-turbo-1106 in the environment variable, but every time I create a new chat I still have to select gpt-3.5-turbo-1106 at the bottom of the window, otherwise I get an error. When I hover over it, the default model is still gpt-3.5-turbo.
Just change the model at the position shown in the screenshot to the default model you want; then you won't have to select it again every time. This requires changing the code. [Default Model 1](https://user-images.githubusercontent.com/125747262/284949166-b2a21c99-1dd3-4c85-b051-e1ad9dc93caf.png)
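A minimal sketch of the kind of change being suggested; the interface and constant below are illustrative stand-ins, since the real default lives in the app's client config store and its exact name may differ between versions:

```ts
// Illustrative stand-in for the default settings that new chats inherit.
interface ModelConfig {
  model: string;
  temperature: number;
}

// Change the hard-coded default so new chats start on the 1106 model
// instead of plain gpt-3.5-turbo.
const DEFAULT_MODEL_CONFIG: ModelConfig = {
  model: "gpt-3.5-turbo-1106",
  temperature: 0.5,
};

console.log(`New chats will default to ${DEFAULT_MODEL_CONFIG.model}`);
```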
I deployed on Vercel, though; I suspect a change like this will just get reverted the next time it updates~
You can fork the author's project first, modify the code in your fork, and then deploy it to Vercel.
Need to look into why these lines of logic aren't taking effect:
https://github.com/Yidadaa/ChatGPT-Next-Web/blob/f2485931d9b3680234f4816f4526759c8d4b741e/app/components/chat.tsx#L444
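For readers following along, those lines implement a check of roughly this shape: if the session's current model is no longer in the available list, switch to one that is. The sketch below is self-contained and written for illustration, not copied from chat.tsx:

```ts
// Illustrative sketch of the fallback check around chat.tsx#L444.
interface Model {
  name: string;
  available: boolean;
}

function pickSessionModel(current: string, models: Model[]): string {
  const stillAvailable = models.some((m) => m.name === current && m.available);
  if (stillAvailable) return current;
  const fallback = models.find((m) => m.available);
  return fallback ? fallback.name : current;
}

// With CUSTOM_MODELS=-all,+gpt-3.5-turbo-1106 only the 1106 model stays available,
// so a session still pointing at gpt-3.5-turbo should be switched over:
console.log(
  pickSessionModel("gpt-3.5-turbo", [
    { name: "gpt-3.5-turbo", available: false },
    { name: "gpt-3.5-turbo-1106", available: true },
  ]),
); // -> "gpt-3.5-turbo-1106"
```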
The -all configuration value does not take effect.
Could you please tell me if CUSTOM_MODELS has been fixed?
same issue
Thanks, I got it working this way. With the 1106 model specified, I no longer have to pick the model on the client every time I switch devices.
Those lines in chat.tsx do take effect, but it seems the chatStore -> session is not updated. For details, see: https://github.com/Yidadaa/ChatGPT-Next-Web/pull/3356/files
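In other words, the fallback model gets computed but is never written back into the stored session, so the UI keeps showing the old one. A self-contained sketch of the fix idea follows; the Session shape and helper name here are illustrative, not the store API touched by that PR:

```ts
// Illustrative sketch: persist the fallback into the session instead of
// only computing it locally for the current render.
interface Session {
  modelConfig: { model: string };
}

function syncSessionModel(session: Session, availableModels: string[]): void {
  if (!availableModels.includes(session.modelConfig.model)) {
    session.modelConfig.model = availableModels[0] ?? session.modelConfig.model;
  }
}

const session: Session = { modelConfig: { model: "gpt-3.5-turbo" } };
syncSessionModel(session, ["gpt-3.5-turbo-1106"]);
console.log(session.modelConfig.model); // -> "gpt-3.5-turbo-1106"
```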
@shaolinah I have a question: if I make this change in my fork, will it be overwritten when the fork syncs with Yidadaa's upstream repo? I deployed on Vercel.
When will the Docker image be updated? The problem no longer seems to exist for builds deployed from source, but it is still there in the Docker image 😭
