
Benchmark on SWE-Bench

Open · distbit0 opened this issue on Apr 10, 2024

It would be interesting to see this project's performance on the SWE-bench benchmark, so that it can be more clearly differentiated from the growing number of other coding agents.

  • https://www.swebench.com/

  • https://github.com/princeton-nlp/SWE-bench

  • [ICLR 2024] SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

  • https://arxiv.org/abs/2310.06770

  • SWE-bench: Can Language Models Resolve Real-World GitHub Issues?

Carlos E. Jimenez, John Yang, Alexander Wettig, Shunyu Yao, Kexin Pei, Ofir Press, Karthik Narasimhan

Language models have outpaced our ability to evaluate them effectively, but for their future development it is essential to study the frontier of their capabilities. We consider real-world software engineering to be a rich, sustainable, and challenging testbed for evaluating the next generation of language models. We therefore introduce SWE-bench, an evaluation framework including 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories. Given a codebase along with a description of an issue to be resolved, a language model is tasked with editing the codebase to address the issue. Resolving issues in SWE-bench frequently requires understanding and coordinating changes across multiple functions, classes, and even files simultaneously, calling for models to interact with execution environments, process extremely long contexts, and perform complex reasoning that goes far beyond traditional code generation. Our evaluations show that both state-of-the-art proprietary models and our fine-tuned model SWE-Llama can resolve only the simplest issues. Claude 2 and GPT-4 solve a mere 4.8% and 1.7% of instances respectively, even when provided with an oracle retriever. Advances on SWE-bench represent steps towards LMs that are more practical, intelligent, and autonomous.
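For context, a rough sketch of how an agent such as ChatDev could be plugged into a SWE-bench run. This assumes the dataset is loaded from the Hugging Face Hub and that predictions are written in the instance_id / model_name_or_path / model_patch JSONL format the official harness consumes; the generate_patch helper is a hypothetical stand-in for the agent, not part of either project:

```python
# Sketch: produce a SWE-bench predictions file for a coding agent.
# Dataset name, field names, and the prediction schema are assumptions;
# verify against https://github.com/princeton-nlp/SWE-bench before use.
import json
from datasets import load_dataset


def generate_patch(problem_statement: str, repo: str, base_commit: str) -> str:
    """Hypothetical agent call: return a unified diff that resolves the issue."""
    raise NotImplementedError("plug in ChatDev or another coding agent here")


def main() -> None:
    # Each instance pairs a real GitHub issue with the repository state it was filed against.
    dataset = load_dataset("princeton-nlp/SWE-bench", split="test")

    predictions = []
    for instance in dataset:
        patch = generate_patch(
            instance["problem_statement"],
            instance["repo"],
            instance["base_commit"],
        )
        predictions.append({
            "instance_id": instance["instance_id"],
            "model_name_or_path": "chatdev",
            "model_patch": patch,
        })

    # The official harness reads a predictions file and applies each patch inside
    # the repo's test environment to check whether the issue's tests now pass.
    with open("predictions.jsonl", "w") as f:
        for pred in predictions:
            f.write(json.dumps(pred) + "\n")


if __name__ == "__main__":
    main()
```

The per-instance patch generation is where an agent's multi-file reasoning is actually exercised; scoring itself is done separately by the benchmark's evaluation harness.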

distbit0 · Apr 10, 2024

Thank you very much for your suggestion; we will continue to improve ChatDev in the future.

JYM0000 · Apr 12, 2024

Thank you for your suggestion! We have other ongoing research related to real-life issues and their solutions. Right now ChatDev serves as a software-level solution, and we may not test it on SWE-bench in the short term.

thinkwee · May 7, 2024