Qwen
Could you add a code example of calling the Qwen model from a LangChain agent?
Start Date
No response
Implementation PR
No response
Reference Issues
No response
Summary
A code example of a LangChain agent calling the Qwen model.
Basic Example

agent_open_functions = initialize_agent(
    tools=tools,
    llm=llm,  # llm is the Qwen model
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
Drawbacks
None
Unresolved questions
No response
There is one here: https://github.com/QwenLM/Qwen/blob/main/examples/function_call_examples.py#L209
I can't get the example to run; I get an APIConnectionError:

llm = ChatOpenAI(
    model_name="Qwen",
    openai_api_base="http://localhost:8000/v1",
    openai_api_key="EMPTY",
    streaming=False,
)

Is something wrong with openai_api_base="http://localhost:8000/v1" or openai_api_key="EMPTY"?
Do you have the specific error message? One thing I do know: the openai package version needs to be below 1.0.0, i.e. pip install "openai<1.0.0". Whether langchain also has a version constraint, I'm not sure yet.
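The openai<1.0.0 pin mentioned above can be checked at runtime. A small sketch; the helper name and the sample version strings are illustrative, only the 1.0.0 bound comes from the comment:

```python
from importlib.metadata import version  # stdlib; version("openai") gives the installed version

def openai_is_pre_v1(ver: str) -> bool:
    """True if an openai version string is below 1.0.0 (the legacy client API)."""
    return int(ver.split(".")[0]) < 1

print(openai_is_pre_v1("0.28.1"))  # True:  legacy client, compatible with these snippets
print(openai_is_pre_v1("1.30.1"))  # False: new client API, needs different code
```

Against the live environment you would call `openai_is_pre_v1(version("openai"))`.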
It works now, but I found that with the 14B and 72B models, tool use with agent=AgentType.OPENAI_FUNCTIONS works poorly; the tools never actually get called:
Thought: I can answer now. Final answer: I will now use the Google search engine to find news about December 13, 2023 for you. Please note that these results may change over time and may be inaccurate or outdated.
[search results]
Sorry, as a language model trained on data up to 2021, I cannot access or search for information from 2023. I can only provide information up to 2021. If you have any other questions, feel free to ask.
I haven't tried agent=AgentType.OPENAI_FUNCTIONS myself, so a langchain parsing error can't be ruled out. I only tried AgentType.ZERO_SHOT_REACT_DESCRIPTION, and that was a long time ago. Please post the exact query and script and we'll take a look when we have time.
(Qwen's ability here shouldn't be that bad; many community members have it working, and there are application examples under the qwen-agent project.)
Testing shows Qwen works best with the chat-zero-shot-react-description agent type.
@ZhuJD-China @JianxinMa My results with 14B are also not great: it picks a tool but never actually uses it. Have 7B / 72B had extra training on the ReAct prompt, and can they even support multi-turn ReAct?
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain import SerpAPIWrapper

llm = ChatOpenAI(
    temperature=0,
    # max_tokens=90,
    streaming=False,
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="/usr/src/app/model/Qwen-14B-Chat-AWQ"
)
agent_chain = initialize_agent(
    tools,  # tool list (definition not shown in the original post)
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    stop=["Observation:", "Observation:\n", "\nObservation:"],
    handle_parsing_errors=True
)
result = agent_chain.run(
    "現在時間是?",  # "What is the current time?"
)
The same setup with a different query:

result = agent_chain.run(
    "北京現在的首長是?",  # "Who is the current head of Beijing's government?"
)
Tested with Qwen-7B: it looks like it is using the tool, but it never actually calls the SerpAPI.
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType
from langchain import SerpAPIWrapper

llm = ChatOpenAI(
    temperature=0,
    # max_tokens=90,
    streaming=True,
    openai_api_key="EMPTY",
    openai_api_base="http://localhost:8000/v1",
    model_name="/usr/src/app/model/Qwen-7B-Chat-AWQ"
)
agent_chain = initialize_agent(
    tools,  # tool list (definition not shown in the original post)
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    stop=["Observation:", "Observation:\n", "\nObservation:"],
    handle_parsing_errors=True
)
result = agent_chain.run(
    "使用上網工具查詢現在天氣?",  # "Use the web tool to look up the current weather?"
)
This issue has been automatically marked as inactive due to lack of recent activity. Should you believe it remains unresolved and warrants attention, kindly leave a comment on this thread.
About openai_api_key="EMPTY": can this be set to anything? And I'm confused about the next two parameters. Isn't one of them enough, or do both need to be filled in? If the inference service is already running, why do I also need to pass the model path? Thanks.
openai_api_base="http://localhost:8000/v1", model_name="/usr/src/app/model/Qwen-7B-Chat-AWQ"
Both openai_api_base and model_name are required. What to put in model_name depends on the inference service's documentation; vllm, ollama, and other backends each name the model differently.
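One way to find the right model_name is to ask the server itself: OpenAI-compatible backends expose GET /v1/models. A stdlib-only sketch of parsing that response; the sample payload below mirrors the shape such servers return, and the path in it is taken from the script above:

```python
import json
from urllib.request import urlopen  # use this against a live server

def served_model_names(models_response: dict) -> list:
    """Extract model ids from an OpenAI-style GET /v1/models payload."""
    return [m["id"] for m in models_response.get("data", [])]

# Sample payload in the shape OpenAI-compatible servers return:
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "/usr/src/app/model/Qwen-7B-Chat-AWQ", "object": "model"}
  ]
}
""")
print(served_model_names(sample))  # ['/usr/src/app/model/Qwen-7B-Chat-AWQ']

# Against a real server you would do something like:
# names = served_model_names(json.load(urlopen("http://localhost:8000/v1/models")))
```

Whatever id comes back is what belongs in model_name.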
The value you used is a model path. So should I be filling in the model path?
@chuangzhidan In my example the backend is an OpenAI-compatible server deployed with vLLM. openai_api_base is the endpoint where the backend API is served. model_name is the model name configured when the server was launched; for OpenAI itself that would be gpt-4, gpt-3.5-turbo, and so on. I wrote '/usr/src/app/model/Qwen-7B-Chat-AWQ' because if no model name is set at launch time, vLLM defaults to using the model path as the name.
If anything is unclear, feel free to ask.
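For reference, a launch command along these lines sets an explicit served name so clients don't need the filesystem path. The path, name, and port here are placeholders; flags follow vLLM's OpenAI-compatible server:

```shell
# Serve a local model under a friendly name; clients then use model_name="qwen-7b-chat"
python -m vllm.entrypoints.openai.api_server \
  --model /usr/src/app/model/Qwen-7B-Chat-AWQ \
  --served-model-name qwen-7b-chat \
  --port 8000
```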
I deployed with Flask, so I only have the URL http://10.10.18.15:6006/glm/predict, and it errors out:

llm = ChatOpenAI(
    model_name='/glm/predict',
    openai_api_base='http://10.10.18.15:6006/v1',
    openai_api_key='EMPTY',
    streaming=False,
)

  File "/root/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "/root/anaconda3/lib/python3.9/site-packages/openai/api_requestor.py", line 767, in _interpret_response_line
    raise error.APIError(
openai.error.APIError: HTTP code 404 from API (<!doctype html>
404 Not Found: The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.)

Server logs:
10.0.14.10 - - [03/Jun/2024 15:35:55] "POST /glm/predict/chat/completions HTTP/1.1" 404 -
10.0.14.10 - - [03/Jun/2024 15:35:59] "POST /glm/predict/chat/completions HTTP/1.1" 404 -
10.0.14.10 - - [03/Jun/2024 15:39:07] "POST /v1/chat/completions HTTP/1.1" 404 -
10.0.14.10 - - [03/Jun/2024 15:39:11] "POST /v1/chat/completions HTTP/1.1" 404 -
192.168.28.101 - - [03/Jun/2024 15:47:29] "POST /glm/predict HTTP/1.1" 200 -
10.0.14.10 - - [03/Jun/2024 15:52:07] "POST /v1/chat/completions HTTP/1.1" 404 -
10.0.14.10 - - [03/Jun/2024 15:52:11] "POST /v1/chat/completions HTTP/1.1" 404 -
10.0.14.10 - - [03/Jun/2024 16:10:34] "POST /v1/chat/completions HTTP/1.1" 404 -
10.0.14.10 - - [03/Jun/2024 16:10:38] "POST /v1/chat/completions HTTP/1.1" 404 -
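The 404s follow from how the legacy (pre-1.0) openai client builds its request URL: it appends /chat/completions to whatever openai_api_base is set to, so a Flask app that only serves /glm/predict never receives the request at the expected route. A small sketch of the URL the client actually hits, using the base URLs from the post above:

```python
def chat_completions_url(openai_api_base: str) -> str:
    """The legacy (<1.0) openai client posts chat requests to
    <openai_api_base>/chat/completions."""
    return openai_api_base.rstrip("/") + "/chat/completions"

# The two bases tried in the post, matching the 404 paths in the Flask logs:
print(chat_completions_url("http://10.10.18.15:6006/v1"))
# -> http://10.10.18.15:6006/v1/chat/completions (404: Flask has no such route)
print(chat_completions_url("http://10.10.18.15:6006/glm/predict"))
# -> http://10.10.18.15:6006/glm/predict/chat/completions (also 404)
```

So the fix is on the server side: expose a POST route at /v1/chat/completions that speaks the OpenAI chat format (or put the Flask app behind a backend like vLLM that already does).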