How to update code with functionality from another file
Does anyone have any good tips on prompting with aider (Claude 3.5 Sonnet)?
I have literally 2 files:
- App.py (700 lines of code)
- JSON drop down (200 lines of code)
Both are Python apps running on Streamlit.
I've been trying to get aider to implement the exact functionality from file 2 into file 1.
It gets 60-70% of the code right. I spent 4 hours and multiple prompts trying to do it. No luck.
I then tried asking aider to split file 1 into multiple files to make it more efficient and robust, but it never makes it all the way through accurately.
When I run its attempt, the app is broken or doesn't behave the same as the single-file version.
I really need some help.
Thanks
Thanks for trying aider and filing this issue. This doc may be helpful:
https://aider.chat/docs/usage/tips.html
Also, when reporting problems, it is very helpful if you can provide the info below. It will help me give a more complete answer.
- Aider version
- LLM model you are using
Including the “announcement” lines that aider prints at startup is an easy way to share some of this helpful info.
Aider v0.37.1-dev
Models: gpt-4o with diff edit format, weak model gpt-3.5-turbo
Git repo: .git with 243 files
Repo-map: using 1024 tokens
@paul-gauthier thanks so much for a quick response.
I'll check this doc and provide the additional details soon.
Still no luck with the tips.
Here is the additional info @paul-gauthier:
Aider v0.55.0
Main model: claude-3-5-sonnet-20240620 with diff edit format, infinite output
Weak model: claude-3-haiku-20240307
Git repo: .git with 4 files
Repo-map: using 1024 tokens, auto refresh
VSCode terminal detected, pretty output has been disabled
Is there anything else that I can get you?
Could someone kindly provide some prompts or examples where they were able to achieve this?
> I then tried asking aider to split file 1 into multiple files to make it more efficient and robust, but it never makes it all the way through accurately.
My workflow in a situation like this:
- ask aider to create tests for your big file implementation
- check if the tests actually exercise the implementation (LLMs love to write tests that only check mocked functions inside the test script); see the sketch after this list
- ask aider to split up the big file, piece by piece (e.g. by function or bigger logic parts)
- run the tests to check if everything still works
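To illustrate the second point, here is a minimal pytest sketch of the difference between a test that exercises the real implementation and one that only checks a mock. The module name `App` and the function `build_dropdown_options` are hypothetical stand-ins, just to make the example concrete; adjust them to your actual code.

```python
# Minimal pytest sketch. `App` and `build_dropdown_options` are hypothetical
# stand-ins for your real module and function names.
# Note: importing a Streamlit script runs its top-level code, so it helps to
# keep testable logic in plain functions rather than at module level.
from unittest.mock import patch

import App  # hypothetical: your App.py, importable from the project root


def test_options_real_implementation():
    # Good: calls the actual function in App.py and checks its real output.
    config = {"choices": ["alpha", "beta"]}
    assert App.build_dropdown_options(config) == ["alpha", "beta"]


def test_options_mocked_away():
    # Bad: the real function is patched out, so this test passes even if
    # the implementation in App.py is broken or missing entirely.
    with patch("App.build_dropdown_options", return_value=["alpha", "beta"]):
        assert App.build_dropdown_options({"anything": True}) == ["alpha", "beta"]
```

If the generated suite mostly looks like the second test, it won't actually protect you during the split; rewriting those tests to call the real functions first makes the later refactor much safer.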
I'm going to close this issue for now, but feel free to add a comment here and I will re-open. Or feel free to file a new issue any time.