AgentTuning
AgentTuning 7B evaluated on HH does not match the paper's result
I tried https://huggingface.co/THUDM/agentlm-7b, but the result is far below 84% on alfworld-std. Is it the wrong model?
{ "total": 50, "validation": { "running": 0.0, "completed": 0.1, "agent context limit": 0.0, "agent validation failed": 0.0, "agent invalid action": 0.62, "task limit reached": 0.28, "unknown": 0.0, "task error": 0.0, "average_history_length": 62.22, "max_history_length": 91, "min_history_length": 20 }, "custom": { "overall": { "total": 50, "pass": 5, "wrong": 45, "success_rate": 0.1 } } }
Your output suggests a mismatch in the evaluation setup. Please make sure you are using the evaluation code in ./AgentBench.old, as mentioned in the README, not the latest THUDM/AgentBench repo. Could you kindly provide your trajectories for a thorough review?
Yes, I was using the latest version. Where should I send the trajectory files?
But I can get 0.84 with GPT-4:
{ "total": 50, "validation": { "running": 0.0, "completed": 0.84, "agent context limit": 0.0, "agent validation failed": 0.0, "agent invalid action": 0.04, "task limit reached": 0.12, "unknown": 0.0, "task error": 0.0, "average_history_length": 50.56, "max_history_length": 91, "min_history_length": 21 }, "custom": { "overall": { "total": 50, "pass": 42, "wrong": 8, "success_rate": 0.84 } } }
Here are my trajectories on HH for a thorough review.
Link: https://pan.baidu.com/s/1Np291cysxDQDozzr4RiJDQ?pwd=1ijk
Extraction code: 1ijk
As mentioned in https://github.com/THUDM/AgentTuning#held-in-tasks
The 6 held-in tasks are selected from AgentBench. However, since AgentBench is still under active development, the results from the latest branch might not fully reproduce the results reported in the paper. The evaluation code of this project is located in ./AgentBench.old.
Please use the AgentBench.old directory for agent task evaluation.
But it is still a lot lower on the latest AgentBench test, which is a bit unexpected. Please make sure the uploaded model is okay.
How many epochs did you train for?
The models are trained for 2k steps, batch size 64, sequence length 4096 with packing.
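For anyone trying to match that budget: assuming every packed sequence fills the 4096-token window, the schedule works out to roughly half a billion training tokens:

```python
# Back-of-the-envelope token budget for the stated schedule, assuming each
# packed sequence is filled to the full context window.
steps = 2000
batch_size = 64
seq_len = 4096

tokens = steps * batch_size * seq_len
print(f"{tokens:,} tokens")  # 524,288,000, i.e. roughly 0.5B tokens
```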
I used FastChat to fine-tune Llama 2, but the result was not ideal. Were you able to reproduce the paper's results after fine-tuning with FastChat? My batch size is admittedly small (only 2), and the improvement in task completion after fine-tuning is not significant. Do you have any suggestions? In addition, ChatGLM3-6B can reach 64% on HH tasks, which also demonstrates the effectiveness of AgentTuning.
Also, one of the AgentInstruct entries is invalid: { "conversations": [ { "from": "human", "loss": false, "value": "'''\n
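A quick way to scan a split for entries like that (the filename is a placeholder, and the checks only look for empty values and samples with no loss-bearing turn, so extend them as needed):

```python
import json

# Placeholder path; point it at the AgentInstruct split you are loading,
# assuming it is stored as a JSON list of samples.
with open("agentinstruct_split.json") as f:
    data = json.load(f)

for i, sample in enumerate(data):
    convs = sample.get("conversations", [])
    # A sample where no turn has loss=True contributes nothing to training.
    if not any(turn.get("loss") for turn in convs):
        print(f"sample {i}: no turn contributes to the loss")
    for j, turn in enumerate(convs):
        # Empty or whitespace-only values are likely broken entries.
        if not str(turn.get("value", "")).strip():
            print(f"sample {i}, turn {j}: empty value")
```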
Since I got poor results after fine-tuning with FastChat, I intend to improve the model further by enlarging the dataset: adding the training data from the ALFWorld dataset and then re-evaluating. Could this approach be effective? Could you offer some advice?
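A rough sketch of that mixing step, assuming the ALFWorld trajectories have already been converted into the same conversation schema (both filenames and the mixing ratio are placeholders, not recommendations):

```python
import json
import random

# Placeholder filenames for the existing fine-tuning data and the extra
# ALFWorld training trajectories (already in the same conversation format).
with open("agentinstruct.json") as f:
    base = json.load(f)
with open("alfworld_train_trajectories.json") as f:
    extra = json.load(f)

# Keep the original data dominant so the model does not overfit to ALFWorld;
# the 30% cap is an arbitrary starting point to tune, not a recommendation.
random.seed(0)
k = min(len(extra), int(0.3 * len(base)))
mixed = base + random.sample(extra, k=k)
random.shuffle(mixed)

with open("mixed_train.json", "w") as f:
    json.dump(mixed, f, ensure_ascii=False, indent=2)
```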
Is alfworld's prompt "alfworld_multiturn_new.json" better than "alfworld_multiturn_react.json"?
@Dhaizei Hi, I have also been looking at ALFWorld recently. When I evaluate agentlm-7b directly on the original ALFWorld (https://github.com/alfworld/alfworld), only 1 of the 134 eval tasks succeeds, which is far from the numbers in the paper. I noticed you asked the authors a similar question before; could I ask what result you reproduced? Thanks a lot.
You probably need to look at the prompt and adapt it. I have not run this in a long time, but I would guess GLM or Qwen now does much better than the results they published back then. Many models now have strong reasoning and planning abilities, for example DeepSeek-R1.
@Dhaizei I tested them all, from Qwen2.5-7B-Instruct to the 7B R1-distilled version, and all 134 eval tasks on the original ALFWorld failed (agentlm-7b at least got one HH task right). Thanks, I will go check the prompts.
I looked at the interaction traces before, and most of the failures were obvious mistakes in the reasoning. After trying R1 and Qwen2.5, I feel these tasks should be easy to pass 😂. Check what the concrete errors are: record the interaction traces and see whether the problem is the model itself or an unsuitable prompt.
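For recording the traces, a minimal logger like the one below is enough; `env` and `agent` are placeholders for whatever ALFWorld wrapper and model client you use (assumed to expose reset()/step() and act()):

```python
import json

def run_and_record(env, agent, max_steps=50, out_path="trace.json"):
    """Roll out one episode and dump the full interaction to a JSON file."""
    trace = []
    obs = env.reset()
    for t in range(max_steps):
        action = agent.act(obs)                 # raw model output, may be invalid
        obs_next, reward, done = env.step(action)
        trace.append({"step": t, "observation": obs, "action": action,
                      "reward": reward, "done": done})
        obs = obs_next
        if done:
            break
    with open(out_path, "w") as f:
        json.dump(trace, f, ensure_ascii=False, indent=2)
    return trace
```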