MetaGPT
Discussion - Incremental Development Mode
I'm keen on crafting an incremental development solution and would appreciate insight into the current status of the 'incremental' implementation in MetaGPT. Is anyone working on this right now? What led to its abandonment, and what hurdles were encountered along the way? Do you know of alternative incremental development approaches or open-source projects that might shed light on best practices? @gnovelli proposed implementing the Agile workflow in this GitHub thread (https://github.com/geekan/MetaGPT/issues/149#issuecomment-1675847202).
Work on incremental development is still ongoing, and we are now more focused on making breakthroughs in this area than ever before. Our main challenge at the moment is the lack of a robust testing suite to assess our improvements, so our efforts are currently directed at addressing that. We believe resolving this will accelerate progress going forward.
Have you checked Aider? https://aider.chat/
Any update on this?
GPT Pilot is also very interesting.
@kripper We have more experiments and updates on the results. But frankly speaking, this is a relatively complex scientific research problem. After the next few weeks of experimentation, we will standardize the results related to incremental development and make the corresponding scores public.
But frankly speaking, this is a relatively complex scientific research problem.
Please elaborate on the main issues you are currently struggling with, and I will provide you with solutions. I'm an old, self-confident problem solver.
@kripper You can look into SWE-bench if you are interested. If you want to participate in performance optimization, I personally welcome it
Ok. Do you have a test case? Or even better, a conversation log showing where the model/framework is failing? I'm very good at decomposing prompts to make them work.
I wrote an OpenAI API endpoint proxy that logs all prompts/responses in order to 1) analyze what similar open-source projects are doing under the hood and 2) rewrite the prompts to make them work (without needing to modify the source code).
I've been using this approach to enable Gemini Pro support in all of these projects that only work with GPT-4.
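For reference, a minimal sketch of such a logging/rewriting proxy, assuming FastAPI and httpx are available; the endpoint path, log file, and rewrite hook are illustrative and not the actual proxy described above:
```python
# Minimal sketch of a logging/rewriting proxy for an OpenAI-compatible chat completions API.
# Assumptions: fastapi and httpx are installed; UPSTREAM_URL, the log file, and
# rewrite_prompt() are illustrative placeholders.
import json
import httpx
from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

app = FastAPI()
UPSTREAM_URL = "https://api.openai.com/v1/chat/completions"  # or any compatible backend

def rewrite_prompt(messages: list[dict]) -> list[dict]:
    # Hook for tweaking prompts without modifying the calling project's source.
    return messages

@app.post("/v1/chat/completions")
async def proxy(request: Request):
    payload = await request.json()
    payload["messages"] = rewrite_prompt(payload.get("messages", []))
    with open("prompts.log", "a") as f:
        f.write(json.dumps({"request": payload}) + "\n")
    async with httpx.AsyncClient(timeout=600) as client:
        upstream = await client.post(
            UPSTREAM_URL,
            headers={"Authorization": request.headers.get("Authorization", "")},
            json=payload,
        )
    response = upstream.json()
    with open("prompts.log", "a") as f:
        f.write(json.dumps({"response": response}) + "\n")
    return JSONResponse(content=response, status_code=upstream.status_code)
```
A project can then be pointed at this proxy by overriding its OpenAI base URL, so prompts can be inspected or rewritten without touching the project's code.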
@kripper I mean this dataset: https://github.com/princeton-nlp/SWE-bench. We are testing and optimizing against it.
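For anyone who wants to inspect that dataset locally, a minimal sketch assuming the Hugging Face datasets package is installed and the dataset id is still princeton-nlp/SWE-bench; the field names shown come from the published dataset card and may change:
```python
# Sketch: load the SWE-bench task instances for local inspection.
from datasets import load_dataset

swe_bench = load_dataset("princeton-nlp/SWE-bench", split="test")
print(f"{len(swe_bench)} task instances")

example = swe_bench[0]
print(example["repo"], example["instance_id"])
print(example["problem_statement"][:200])
```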
@geekan see https://youtu.be/TgB6JO6gup0?si=VJ9HKb07v-xcaihL
I went through the code and found that the way you apply new changes is incorrect for other languages.
I tried setting up the unit test project in the repo, and it seems to work in Python. But when I used the command to make an incremental change for Dart (which is used for Flutter), it did not work.
The prompt in metagpt/actions/write_code_plan_and_change_an.py only fits Python.
I would not suggest using git apply with the diff to change the files, because once the files get big that command won't work. It's better to overwrite the code, generate the diff afterwards, and save it into change.json.
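A minimal sketch of that overwrite-then-diff idea, using Python's difflib; the change.json layout here is illustrative rather than MetaGPT's actual format:
```python
# Sketch of the "overwrite first, diff afterwards" approach suggested above.
import difflib
import json
from pathlib import Path

def overwrite_and_record_diff(path: str, new_code: str, change_log: str = "change.json") -> None:
    target = Path(path)
    old_code = target.read_text() if target.exists() else ""
    target.write_text(new_code)  # overwrite instead of patching with git apply

    diff = "".join(
        difflib.unified_diff(
            old_code.splitlines(keepends=True),
            new_code.splitlines(keepends=True),
            fromfile=f"a/{path}",
            tofile=f"b/{path}",
        )
    )
    # Append the generated diff so the change history is still recoverable.
    log = Path(change_log)
    entries = json.loads(log.read_text()) if log.exists() else []
    entries.append({"file": path, "diff": diff})
    log.write_text(json.dumps(entries, indent=2))
```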
New SOTA: https://youtu.be/J2bFf83t1b8?si=GPI1iOgZCBLMEHlf
Hello, I see that you have introduced an incremental feature, but I can't tell whether it works or not.
I ran two experiments:
Experiment A
- metagpt --project-name python_file_functions "Write a program that has functions stored in one or more files, such as utils.py, and prints tests as needed from the main main.py file. Start with factorial and fibonacci"
- metagpt --inc --project-name python_file_functions "Add natural numbers sequence"
MetaGPT produces CONTENT for the code plan and change, but doesn't integrate it into a new version of the whole code:
[CONTENT]
{
"Code Plan And Change": "\n1. Plan for main.py: Integrate the sequence of natural numbers into the existing codebase of `main.py`. Ensure seamless integration with the overall application architecture and maintain consistency with coding standards.\n```python\nfrom utils import Utils\n\nclass Main:\n def main(self):\n utils = Utils()\n result_factorial = utils.factorial(5)\n result_fibonacci = utils.fibonacci(7)\n result_natural_numbers = utils.natural_numbers_sequence(10)\n print(f\"Factorial of 5: {result_factorial}\")\n print(f\"Fibonacci of 7: {result_fibonacci}\")\n print(f\"Natural numbers sequence up to 10: {result_natural_numbers}\")\n\nif __name__ == \"__main__\":\n main_obj = Main()\n main_obj.main()\n```"
}
[/CONTENT]
The code directory remains unchanged.
Experiment B:
- metagpt --project-name fact "Write a Python functions that computes factorial"
I get a main.py with a factorial function.
- metagpt --inc --project-name fact "Make factorial implementation iterative and remove references to math.factorial"
I get a modified factorial implementation.
- metagpt --inc --project-name fact "Add fibonacci function"
In the code plan and change I get two files, one for each function, but the code directory becomes empty.
How should it work?
In my case, it seems to note the differences but doesn't apply them to the files: https://github.com/sfxworks/log_processor/commit/406e31b9c30b3e3619512a83161ecd55ee0d79fe
I suppose I could write something that applies the generated diffs.
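Something along these lines could work as a stopgap, assuming the diffs are saved as unified-diff files and the standard patch utility is on PATH; the directory layout is illustrative:
```python
# Sketch: apply previously generated unified diffs stored as *.diff files.
import subprocess
from pathlib import Path

def apply_generated_diffs(diff_dir: str, repo_root: str) -> None:
    for diff_file in sorted(Path(diff_dir).glob("*.diff")):
        # patch -p1 strips the leading a/ and b/ prefixes of git-style diffs.
        result = subprocess.run(
            ["patch", "-p1", "-i", str(diff_file.resolve())],
            cwd=repo_root,
            capture_output=True,
            text=True,
        )
        if result.returncode != 0:
            print(f"Failed to apply {diff_file.name}:\n{result.stderr}")
        else:
            print(f"Applied {diff_file.name}")
```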