There is a problem with the LangChain module in the first part.
/data/AutoAct# python Self_Instruct/data_generation.py --source_data Self_Instruct/Meta_sample/Meta_Hotpotqa.json --target_data Self_Instruct/hotpotqa_metaqa.json --dataset_name hotpotqa --generate_all_num 800 --generate_per_round_num 10 --model_name llama-2-13b-chat
/opt/conda/envs/autoact/lib/python3.9/site-packages/langchain/__init__.py:29: UserWarning: Importing OpenAI from langchain root module is no longer supported. Please use langchain_community.llms.OpenAI instead.
  warnings.warn(
/opt/conda/envs/autoact/lib/python3.9/site-packages/langchain/__init__.py:29: UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
  warnings.warn(
/opt/conda/envs/autoact/lib/python3.9/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead: from langchain_community.chat_models import ChatOpenAI. To install langchain-community run pip install -U langchain-community.
  warnings.warn(
Traceback (most recent call last):
  File "/data/AutoAct/Self_Instruct/data_generation.py", line 11, in <module>
Hello, please try langchain==0.0.299.
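For reference, that pin can be applied inside the autoact environment with:

    pip install langchain==0.0.299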
Thank you. But after I changed the LangChain version to 0.0.299, a new issue with openai appeared:
/data/AutoAct# python Self_Instruct/data_generation.py --source_data Self_Instruct/Meta_sample/Meta_Hotpotqa.json --target_data Self_Instruct/hotpotqa_metaqa.json --dataset_name hotpotqa --generate_all_num 800 --generate_per_round_num 10 --model_name llama-2-13b-chat
have generated num 2, all 800 need to be generated all
Traceback (most recent call last):
  File "/data/AutoAct/Self_Instruct/data_generation.py", line 177, in <module>
Oh, I notice that openai is not listed in the requirements file. But after I removed the openai module, there is still an openai-related error:
/data/AutoAct# python Self_Instruct/data_generation.py --source_data Self_Instruct/Meta_sample/Meta_Hotpotqa.json --target_data Self_Instruct/hotpotqa_metaqa.json --dataset_name hotpotqa --generate_all_num 800 --generate_per_round_num 10 --model_name llama-2-13b-chat
Traceback (most recent call last):
  File "/data/AutoAct/Self_Instruct/data_generation.py", line 177, in <module>
    [...] pip install openai. (type=value_error)
Please try installing openai==0.28.0.
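i.e.:

    pip install openai==0.28.0

(openai 0.28.x is the last release with the old openai.ChatCompletion-style API that langchain 0.0.299 expects.)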
Thank you, but there are some new issues:
/data/AutoAct# python Self_Instruct/data_generation.py --source_data Self_Instruct/Meta_sample/Meta_Hotpotqa.json --target_data Self_Instruct/hotpotqa_metaqa.json --dataset_name hotpotqa --generate_all_num 800 --generate_per_round_num 10 --model_name llama-2-13b-chat
have generated num 2, all 800 need to be generated all
Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry ...
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/urllib3/connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/urllib3/connectionpool.py", line 495, in _make_request
    conn.request(
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/urllib3/connection.py", line 441, in request
    self.endheaders()
  File "/opt/conda/envs/autoact/lib/python3.9/http/client.py", line 1280, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/opt/conda/envs/autoact/lib/python3.9/http/client.py", line 1040, in _send_output
    self.send(msg)
  File "/opt/conda/envs/autoact/lib/python3.9/http/client.py", line 980, in send
    self.connect()
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/urllib3/connection.py", line 279, in connect
    self.sock = self._new_conn()
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/urllib3/connection.py", line 214, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f79a57e8a00>: Failed to establish a new connection: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/requests/adapters.py", line 667, in send
    resp = conn.urlopen(
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/urllib3/connectionpool.py", line 843, in urlopen
    retries = retries.increment(
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/urllib3/util/retry.py", line 519, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f79a57e8a00>: Failed to establish a new connection: [Errno 111] Connection refused'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/openai/api_requestor.py", line 596, in request_raw
    result = _thread_context.session.request(
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/opt/conda/envs/autoact/lib/python3.9/site-packages/requests/adapters.py", line 700, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /v1/chat/completions (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f79a57e8a00>: Failed to establish a new connection: [Errno 111] Connection refused'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/data/AutoAct/Self_Instruct/data_generation.py", line 177, in <module>
I am running this on a remote server. Is this due to some kind of network/port-mapping issue, or do I need to switch to the real OpenAI API?
Have you deployed llama-2-13b-chat locally? If not, you should first deploy the model via https://github.com/lm-sys/FastChat/blob/main/docs/langchain_integration.md; then you can have the model generate through API calls.
In my case, I deployed each agent locally and it works. If you deploy locally, you should add the controller port number when starting the RESTful API server.
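As a minimal sketch of the client side (assuming openai==0.28.0 and a FastChat openai_api_server on localhost:8000, as in the traceback above), the client can be pointed at the local server via environment variables:

    export OPENAI_API_KEY="EMPTY"                      # FastChat does not check the key
    export OPENAI_API_BASE="http://localhost:8000/v1"  # read by openai 0.28.x at import time

The "Connection refused" on port 8000 simply means nothing is listening there yet, so the controller, the model worker, and the openai_api_server all need to be running first.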
Hi, do you have any further issues?
Thank you for the help! I've progressed to the Group Planning stage and want to use the trained agent groups for benchmarking. But there are some issues:
python Self_Plan/Group_Planning/run_eval.py \
    --agent_name ZeroshotThink_HotPotQA_run_Agent \
    --plan_agent /data/AutoAct/Self_Plan/Train/lora/HotpotQA/13b-plan-5-epoch \
    --tool_agent /data/AutoAct/Self_Plan/Train/lora/HotpotQA/13b-tool-5-epoch \
    --reflect_agent /data/AutoAct/Self_Plan/Train/lora/HotpotQA/13b-reflect-5-epoch \
    --max_context_len 4096 \
    --task HotpotQA \
    --task_path Self_Plan/Group_Planning/benchmark_run/data/hotpotqa \
    --save_path Self_Plan/Group_Planning/output/13b
/opt/conda/envs/autoact/lib/python3.9/site-packages/transformers/utils/generic.py:441: FutureWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
_torch_pytree._register_pytree_node(
There is no output! I'm wondering whether FastChat is required here. Is it necessary to hook up all three agents to FastChat separately, or do I just need to serve llama-2-13b?
The error is caused by a PyTorch version compatibility issue; downgrade to a version < 2.2.
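For example (pip will resolve this to the latest 2.1.x build):

    pip install "torch<2.2"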
Thank you, but it's not an error. I downgraded torch to 2.1 and there are no more warnings. The most important thing, though, is that when I run the command, no output appears.
Oh, I see. I deployed each agent to FastChat separately.
Oh, can you elaborate on how you deployed each agent separately in FastChat? I'm having issues with ports showing up as occupied, even though I set --port on the command line.
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 -m fastchat.serve.model_worker \
    --port 31021 --worker http://localhost:31021 \
    --host localhost \
    --model-names your-model-name \
    --model-path /model/path \
    --max-gpu-memory 31Gib \
    --dtype float16 \
    --num-gpus 8
See https://github.com/zjunlp/AutoAct/blob/main/Scripts/model_bash/single_model.sh. You need to change the port and the worker address together; a port-occupancy check is sketched below.
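If a port shows up as occupied, a quick way to find the conflicting process (standard Linux tooling, assuming it is installed on the server):

    lsof -i :31021           # show which process holds the port
    ss -ltnp | grep 31021    # alternative check

Then either stop that process or pick a free port for both --port and --worker.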
- Run the controller (default port 21001):
  python3 -m fastchat.serve.controller
- Run the openai_api_server:
  python3 -m fastchat.serve.openai_api_server --host localhost --port 8000 --controller-address "http://localhost:21001"
- Deploy each agent as in https://github.com/zjunlp/AutoAct/blob/main/Scripts/model_bash/single_model.sh. I set ports 21002~21004 for the agents:
  # --model-names is one of: plan, action, reflect
  CUDA_VISIBLE_DEVICES=0,1 python3 -m fastchat.serve.model_worker \
      --port 21002 --worker http://localhost:21002 \
      --host localhost \
      --model-names plan \
      --model-path lora_path \
      --max-gpu-memory 31Gib \
      --dtype float16 \
      --num-gpus 2
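Once all three workers have registered with the controller, a quick sanity check (assuming the openai_api_server is running on localhost:8000) is to list the models the API server can see:

    curl http://localhost:8000/v1/models

All three agent names should appear in the response; if the list is empty, the workers never registered with the controller (usually a port or controller-address mismatch).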
Hmm, I used ports 31021, 31022, and 31033 to deploy each agent separately, but there is still no output. My FastChat controller port is 21001, and I used the command "python3 -m fastchat.serve.openai_api_server --host localhost --port 8000" to forward the port.
Oh, I carefully double-checked run_eval.py: line 87 reads "if args.task == 'Hotpotqa':", which doesn't match the command-line argument "--task HotpotQA" character for character (the string comparison is case-sensitive), so no branch executed and there was no output. After matching the task name, run_eval.py now runs and produces output.