EnigmaYYY

Results: 8 comments of EnigmaYYY

Thank you. But after I changed the LangChain version to 0.0.299, a new issue related to openai appeared: /data/AutoAct# python Self_Instruct/data_generation.py --source_data Self_Instruct/Meta_sample/Meta_Hotpotqa.json --target_data Self_Instruct/hotpotqa_metaqa.json --dataset_name hotpotqa --generate_all_num 800 --generate_per_round_num...

Oh, I notice that openai is not included in the requirements file. But when I removed the openai module, there is still an issue related to openai: /data/AutoAct# python...
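For anyone else hitting this, my working assumption (not something stated in the AutoAct repo) is that langchain 0.0.299 predates the openai 1.x SDK, so the openai package still has to be installed separately as a pre-1.0 release, e.g. with pip install "openai<1.0". A quick way to check what the environment actually has:

```python
# Quick environment check (assumption: langchain 0.0.299 expects the
# pre-1.0 openai SDK, e.g. installed via `pip install "openai<1.0"`).
from importlib.metadata import version

print(version("openai"))  # expected to be something like 0.28.x
```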

Thank you, but there are some new issues: /data/AutoAct# python Self_Instruct/data_generation.py --source_data Self_Instruct/Meta_sample/Meta_Hotpotqa.json --target_data Self_Instruct/hotpotqa_metaqa.json --dataset_name hotpotqa --generate_all_num 800 --generate_per_round_num 10 --model_name llama-2-13b-chat have generated num 2, all 800 need...

Thank you for the help! I've progressed to the Group Planning stage and want to use the trained Agent Groups for benchmarking. But there are some issues: python Self_Plan/Group_Planning/run_eval.py \ --agent_name...

> The error is caused by pytorch version compatibility, downgrade the version < 2.2

Thank you, but it's not an error. I downgraded torch to version 2.1. There...

> Oh, I see. I deployed each agent to fastchat separately.

Oh, can you elaborate on how you deployed each agent separately in fastchat? I'm having issues with ports showing...

Emmm, I used ports 31021, 31022, and 31033 to deploy each agent separately, but there is still no output. My fastchat controller port is 21001, and I used the...
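For context, a minimal sketch of what I mean by deploying each agent separately (my own setup, not the AutoAct authors' commands; the checkpoint paths are placeholders), using the controller and worker ports mentioned above:

```python
# Minimal sketch: one FastChat controller plus one model_worker per agent.
# The model paths below are placeholders, not the real AutoAct checkpoints.
import subprocess

# Controller on port 21001, as in the comment above.
subprocess.Popen([
    "python", "-m", "fastchat.serve.controller",
    "--host", "0.0.0.0", "--port", "21001",
])

# One worker per trained agent, each on its own port, each registering
# itself with the same controller.
agent_workers = {
    31021: "path/to/plan-agent-checkpoint",      # placeholder
    31022: "path/to/tool-agent-checkpoint",      # placeholder
    31033: "path/to/reflect-agent-checkpoint",   # placeholder
}
for port, model_path in agent_workers.items():
    subprocess.Popen([
        "python", "-m", "fastchat.serve.model_worker",
        "--model-path", model_path,
        "--host", "0.0.0.0",
        "--port", str(port),
        "--worker-address", f"http://localhost:{port}",
        "--controller-address", "http://localhost:21001",
    ])
```

In practice I run each of these in its own terminal; the point is just that every worker gets a distinct --port/--worker-address while all of them point at the same --controller-address.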

Oh, I carefully double-checked run_eval.py: line 87, `if args.task == 'Hotpotqa':`, does not match the command-line argument `--task HotpotQA` character for character, which is what caused the missing output. Now I can run...
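For anyone hitting the same thing, one possible fix (a sketch, not the repo's actual patch) is to normalize the case before comparing:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--task", type=str, required=True)
args = parser.parse_args()

# Compare case-insensitively so "--task HotpotQA", "--task Hotpotqa", and
# "--task hotpotqa" all reach the same branch.
if args.task.lower() == "hotpotqa":
    print("running the HotpotQA branch")
```

Alternatively, just pass the task name exactly as the script spells it (`--task Hotpotqa`).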