ChuanhuChatGPT
[Bug]: Azure OpenAI API does not appear to be called
Is there an existing issue for this bug?
- [X] I confirm there is no existing issue, and I have read the FAQ.
Error description
Azure is not supported.
Steps to reproduce
Fill in the following configuration in config.json:
"openai_api_type": "azure",
"azure_openai_api_key": "xxxxxxxx",
"azure_openai_api_base_url": "xxxxxxxxxxxxxx.openai.azure.com",
"azure_openai_api_version": "2023-07-01-preview",
"azure_deployment_name": "gpt-35-turbo-0613",
"azure_embedding_deployment_name": "text-embedding-ada-002",
"azure_embedding_model_name": "text-embedding-ada-002",
OpenAI is still being called.
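For what it's worth, one way to check which backend is actually being targeted is to inspect the environment at runtime. This is only a sketch and assumes the config.json keys above end up as upper-cased environment variables, which is how the fix at the end of this thread reads them:

import os
# Assumption: "openai_api_type" -> OPENAI_API_TYPE, "azure_openai_api_key" -> AZURE_OPENAI_API_KEY, etc.
# If OPENAI_API_TYPE is missing or "openai", requests go to the regular OpenAI endpoint.
print(os.environ.get("OPENAI_API_TYPE"))
print(os.environ.get("AZURE_OPENAI_API_KEY"))
print(os.environ.get("AZURE_OPENAI_API_BASE_URL"))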
Looking at the source code:
# modules/models/base_model.py
class ModelType(Enum):
    @classmethod
    def get_type(cls, model_name: str):
        model_type = None
        model_name_lower = model_name.lower()
        # ...
        elif "azure" in model_name_lower or "api" in model_name_lower:
            model_type = ModelType.LangchainChat
        return model_type
# modules/models/models.py
def get_model() -> BaseLLMModel:
    # ...
    # None of the available model_name values contain "azure" or "api".
    model_type = ModelType.get_type(model_name)
    # ...
    try:
        # ...
        elif model_type == ModelType.LangchainChat:
            from .Azure import Azure_OpenAI_Client
            model = Azure_OpenAI_Client(model_name, user_name=user_name)
        # ...
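So the dispatch depends only on the model name string, not on openai_api_type. A minimal standalone re-creation of that substring check (a hypothetical helper, purely for illustration) shows why the LangchainChat/Azure branch is never reached for a normal model name:

def _picks_langchain_chat(model_name: str) -> bool:
    # Same substring test as ModelType.get_type() above
    name = model_name.lower()
    return "azure" in name or "api" in name

print(_picks_langchain_chat("gpt-35-turbo-0613"))   # False -> the regular OpenAI client is used
print(_picks_langchain_chat("Azure gpt-35-turbo"))  # True  -> only a name containing "azure" matches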
I tried manually adding an Azure option to ONLINE_MODELS, but after starting the page it throws an error:
2023-11-14 11:53:40,957 [INFO] [base_model.py:457] 用户的输入为:hello?
Exception in thread Thread-6:
Traceback (most recent call last):
  File "/usr/lib64/python3.9/threading.py", line 980, in _bootstrap_inner
    self.run()
  File "/usr/lib64/python3.9/threading.py", line 917, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/ChuanhuChatGPT/modules/models/base_model.py", line 875, in thread_func
    self.model(messages=history, callbacks=[
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/base.py", line 551, in __call__
    generation = self.generate(
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/base.py", line 309, in generate
    raise e
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/base.py", line 299, in generate
    self._generate_with_cache(
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/base.py", line 446, in _generate_with_cache
    return self._generate(
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/openai.py", line 333, in _generate
    for chunk in self._stream(
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/openai.py", line 305, in _stream
    for chunk in self.completion_with_retry(
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/openai.py", line 272, in completion_with_retry
    retry_decorator = _create_retry_decorator(self, run_manager=run_manager)
  File "/opt/ChuanhuChatGPT/.venv/lib64/python3.9/site-packages/langchain/chat_models/openai.py", line 68, in _create_retry_decorator
    openai.error.Timeout,
AttributeError: module 'openai' has no attribute 'error'
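The AttributeError at the bottom looks like a version mismatch rather than an Azure problem: the openai Python package removed the openai.error module in 1.0, while the installed langchain release still references it in its retry decorator. A quick check (assuming you can run Python inside the same virtualenv):

import openai
# openai < 1.0 still exposes openai.error; openai >= 1.0 removed it, which is what
# langchain's _create_retry_decorator trips over in the traceback above.
print(openai.__version__)
print(hasattr(openai, "error"))

Pinning openai below 1.0, or upgrading langchain to a release that supports the 1.x client, should clear this particular error; it does not by itself fix the model-name dispatch issue described above.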
Environment
- OS: Rocky Linux 9.2
- Browser: Chrome
- Gradio version: gradio==3.43.2
- Python version: 3.9.6
Willing to help
- [ ] I am willing to help resolve this!
Additional notes
No response
Emm, the main problem is that I don't have Azure, so it's hard for me to test this... Let me go get an Azure account.
I've run into this problem as well.
Has this bug been fixed yet?
I fixed it locally. In index_func.py, change the code to:
from langchain.embeddings import AzureOpenAIEmbeddings, OpenAIEmbeddings

if os.environ.get("OPENAI_API_TYPE", "openai") == "openai":
    embeddings = OpenAIEmbeddings(openai_api_base=os.environ.get("OPENAI_API_BASE", None),
                                  openai_api_key=os.environ.get("OPENAI_EMBEDDING_API_KEY", api_key))
else:
    embeddings = AzureOpenAIEmbeddings(deployment=os.environ["AZURE_EMBEDDING_DEPLOYMENT_NAME"],
                                       openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
                                       model=os.environ["AZURE_EMBEDDING_MODEL_NAME"],
                                       azure_endpoint=os.environ["AZURE_OPENAI_API_BASE_URL"],
                                       openai_api_type="azure")
Give it a try.
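For anyone wanting to verify the Azure branch actually works, here is a minimal smoke test. It simply mirrors the constructor arguments from the snippet above and assumes the AZURE_* environment variables are already set:

import os
from langchain.embeddings import AzureOpenAIEmbeddings

# Same arguments as the workaround above; values come from the environment.
embeddings = AzureOpenAIEmbeddings(deployment=os.environ["AZURE_EMBEDDING_DEPLOYMENT_NAME"],
                                   openai_api_key=os.environ["AZURE_OPENAI_API_KEY"],
                                   model=os.environ["AZURE_EMBEDDING_MODEL_NAME"],
                                   azure_endpoint=os.environ["AZURE_OPENAI_API_BASE_URL"],
                                   openai_api_type="azure")
print(len(embeddings.embed_query("hello")))  # text-embedding-ada-002 returns a 1536-dimensional vector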