Reproducing AlfWorld Results
Hi,
Thanks for the great work. Unfortunately, we are unable to reproduce your results for ReAct / Reflexion on AlfWorld.
For example, Env0 and Env1 succeed in your results, but they always fail on our end. (Other envs do succeed, so it does work sometimes.)
@noahshinn
Hi @ai-nikolai, what model are you using?
Thanks. The model used: `gpt-3.5-turbo`.
@noahshinn, would it also be possible to upload the actual game logs for AlfWorld?
The model `gpt-3.5-turbo` is not the same model used at the time of the paper (Feb 2023); we used `text-davinci-002`. I'd expect that the mistakes you see result from the inferred action not matching any of the actions in the action space. We followed ReAct's implementation for the AlfWorld results to stay consistent with their work.
To mitigate this, I would advise displaying the action space to the model to eliminate parsing errors. I can add a side implementation for this if it would be helpful for you. Also, I will dig around to see if I can find the original log files from the `text-davinci-002` runs.
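For illustration, a minimal sketch of that suggestion, assuming the AlfWorld TextWorld batch env is configured to return `admissible_commands` in its `info` dict (the `build_prompt` helper is hypothetical, not part of the repo):

```python
# Hypothetical helper: show the admissible commands to the model so that its
# generated action can be matched against the real action space.
# Assumes the env was created with textworld's EnvInfos(admissible_commands=True),
# so each step's info dict carries the valid commands for the current state.

def build_prompt(task_prompt: str, observation: str, info: dict) -> str:
    # The batch env returns one command list per environment; take the first.
    commands = info["admissible_commands"][0]
    listing = "\n".join(f"- {c}" for c in commands)
    return (
        f"{task_prompt}\n"
        f"{observation}\n"
        f"Valid actions:\n{listing}\n"
        f"> "
    )
```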
Thank you @noahshinn.
Please let us know if you have any luck finding the original logs from the `text-davinci-002` runs. This would be a really big help. Thank you.
I had the same issue with `gpt-3.5-turbo`; the success rate seems much lower. The first-trial success rate for me on a subset of tasks is only around 17%, which is consistent with the numbers reported in the AgentBench paper. So it would be really helpful if you could provide the original logs.
Hi all,
A couple of comments to follow up on this:
- The results you report are very hard to reproduce. The model you used, `text-davinci-002`, is deprecated, and the two alternatives, `davinci-002` and `gpt-3.5-turbo`, both reach an accuracy of about 0.3 on a subset, while your reported results are around 0.7. Could you provide the traces, or tell us how we could reproduce your results?
- Secondly, please see the attached screenshot from AgentBench. The relevant column is HH, where you can see that only GPT-4 achieves results comparable to your ReAct results, while `text-davinci-002` (the model your code shows) only achieves 16%, which is in line with our reproducibility experiments.
- Finally, the original ReAct paper implemented the success condition using `info["won"] == True`, while you use `done == True` (see the sketch after this list). This is referenced as an issue in the original alfworld repository: https://github.com/alfworld/alfworld/issues/51
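To make the difference concrete, here is a hypothetical sketch of the two checks, assuming a ReAct-style AlfWorld loop where the batch env's `step` returns `(obs, scores, dones, infos)`:

```python
# One step of a ReAct-style AlfWorld loop (batch env of size 1).
obs, scores, dones, infos = env.step([action])

# ReAct's success condition: the task goal was actually achieved.
won = infos["won"][0]

# The condition questioned above: the episode merely ended, which is also
# True when the step limit is exhausted without solving the task.
done = dones[0]
```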
Concrete Actions / Questions:
- Please clarify how to obtain the results you report. (Were these produced with the weaker models, or were stronger models used, or do you have traces?)
- Please clarify whether we are misunderstanding your results, or whether they are actually 70+% rather than closer to 30%.

@noahshinn @ysymyth @becklabs
@noahshinn - any updates on the above?
Hi @ai-nikolai,
I am also trying to reproduce the results. The performance was poor at first, but after adding a few lines to parse the model's action before executing it, the performance went back to normal.
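A hypothetical sketch of that kind of action normalization (the exact lines are not shown here; `normalize_action` and the cleanup rules are assumptions, based on common failure modes like a leading `> ` or trailing punctuation in the model output):

```python
import re

def normalize_action(raw: str) -> str:
    # Strip a ReAct-style "> " prefix and surrounding whitespace.
    action = raw.strip()
    if action.startswith(">"):
        action = action[1:].strip()
    # Drop trailing punctuation the env's command parser would reject.
    action = action.rstrip(".!")
    # Collapse repeated spaces and lowercase to match AlfWorld's command style.
    return re.sub(r"\s+", " ", action).lower()

# Example: normalize_action("> Go to desk 1.") -> "go to desk 1"
```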