
Reproducing Alfworld Results

Open · ai-nikolai opened this issue on Jan 15, 2024 · 9 comments

Hi,

Thanks for the great work. Unfortunately, we are unable to reproduce your results for ReAct / Reflexion on Alfworld.

E.g., Env0 and Env1 succeed for you, but they always fail on our end. (Other envs do succeed, so it works at least some of the time.)

@noahshinn

ai-nikolai · Jan 15 '24

Hi @ai-nikolai , what model are you using?

noahshinn · Jan 15 '24

Thanks. The model used is gpt-3.5-turbo. @noahshinn

ai-nikolai · Jan 16 '24

@noahshinn would it be possible to also upload the actual game logs for AlfWorld?

ai-nikolai · Jan 16 '24

The model gpt-3.5-turbo is not the model used at the time of the paper (Feb 2023); we used text-davinci-002. I'd expect the failures you see result from the inferred action not matching any action in the action space. We followed ReAct's implementation for the AlfWorld results to stay consistent with their work.

To address this, I would advise displaying the action space to the model to eliminate parsing errors; a sketch of the idea follows. I can add a side implementation for this if it would be helpful for you. Also, I will dig around to see if I can find the original log files from the text-davinci-002 runs.
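For reference, a minimal sketch of that suggestion, assuming the AlfWorld/TextWorld env exposes admissible commands via info["admissible_commands"] (as in ReAct's setup) and a hypothetical `llm` callable for completions; the exact field names in the repo may differ:

```python
import difflib

def build_prompt(observation: str, history: str, commands: list[str]) -> str:
    """Render the current admissible commands directly into the prompt."""
    action_list = "\n".join(f"- {c}" for c in commands)
    return (
        f"{history}\n"
        f"Observation: {observation}\n"
        f"Valid actions:\n{action_list}\n"
        f"Choose exactly one action from the list above.\n> "
    )

def choose_action(observation: str, history: str, info: dict, llm) -> str:
    # AlfWorld's batched TextWorld env returns a list of command lists;
    # take the first (and only) environment's commands.
    commands = info["admissible_commands"][0]
    raw = llm(build_prompt(observation, history, commands)).strip()
    if raw in commands:
        return raw
    # If the model paraphrased, snap to the closest admissible command.
    matches = difflib.get_close_matches(raw, commands, n=1, cutoff=0.0)
    return matches[0] if matches else commands[0]
```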

noahshinn · Jan 16 '24

Thank you @noahshinn.

Please let us know if you had any luck finding the original logs from the text-davinci-002 runs. This would be a really big help. Thank you.

ai-nikolai · Jan 31 '24

I had the same issue with gpt-3.5-turbo: the success rate seems much lower. The first-trial success rate for me on a subset of tasks is only around 17%, which is consistent with the numbers reported in the AgentBench paper. So providing the original logs would be really helpful.

dong-river · Feb 25 '24

Hi all,

A couple of comments to follow up on this:

  1. The results you report are very hard to reproduce. The model you used, text-davinci-002, is deprecated, and the two alternatives, davinci-002 and gpt-3.5-turbo, both reach an accuracy of about 0.3 on a subset, while your reported results are around 0.7. Could you provide the traces, or tell us how to reproduce your results?
  2. Secondly, please see the attached screenshot from AgentBench. The relevant column is HH, where you can see that only GPT-4 achieves results comparable to your ReAct results, while text-davinci-002 (the model your code specifies) achieves only 16%, which is in line with our reproduction experiments.
  3. Finally, the original ReAct paper implemented the success condition as `info["won"] == True`, while you use `done == True` (see the sketch after this list). This is flagged as an issue in the original AlfWorld repository: https://github.com/alfworld/alfworld/issues/51
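For clarity, a minimal sketch of the two success checks being compared; the argument names follow the usual Gym/TextWorld step() return convention, and the exact code in either repo may differ:

```python
def succeeded_done_only(done: bool, info: dict) -> bool:
    # Treats any episode termination as success, which can overcount:
    # an episode that merely ended without completing the task passes.
    return done

def succeeded_won(done: bool, info: dict) -> bool:
    # ReAct / alfworld#51 check: the env must explicitly flag the task as won.
    return done and bool(info.get("won", False))

# An episode that terminated without completing the task:
print(succeeded_done_only(True, {"won": False}))  # True  (false positive)
print(succeeded_won(True, {"won": False}))        # False
```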

Concrete Actions / Questions:

  1. Please clarify how you obtained your results: with the weaker models, with stronger models, or do you have traces?
  2. Please clarify whether we misunderstand your results: are they actually 70+%, or closer to 30%?

@noahshinn @ysymyth @becklabs

[Screenshot: AgentBench results table (HH column)]

ai-nikolai · Mar 08 '24

@noahshinn - any updates on the above?

ai-nikolai · Mar 20 '24

Hi @ai-nikolai, I am also trying to reproduce the results. The performance was bad in the beginning, but after adding a few lines to parse the action, it went back to normal: [screenshot of the parsing code]
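The screenshot with the exact parsing lines is not preserved in this thread; below is a hypothetical reconstruction of the kind of normalization that helps gpt-3.5-turbo outputs match AlfWorld's command format. The original lines in the image may differ:

```python
import re

def parse_action(raw: str) -> str:
    action = raw.strip().lower()
    # Drop a leading marker such as "> " or "action:" if the model emits one.
    action = re.sub(r"^(>\s*|action:\s*)", "", action)
    # AlfWorld commands omit articles: "go to the shelf 1" -> "go to shelf 1".
    action = re.sub(r"\b(the|a|an)\s+", "", action)
    # Collapse whitespace and drop trailing punctuation.
    action = re.sub(r"\s+", " ", action).strip(" .")
    return action

print(parse_action("Action: Go to the shelf 1."))  # -> "go to shelf 1"
```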

CSUN1997 · May 30 '24