Feature Request: Calculate rate limit and pause before the OpenAI API closes the connection
First, let me thank you for the great work! Very nice program, very useful.
I was using aider and everything was perfect, but then my connection was closed by OpenAI because of openai.error.RateLimitError. It would be great if aider could calculate the average tokens used and pause before hitting the limit.
log:
continu implementing all components
Retry in 0.7087802705903429 seconds.
Retry in 1.8306908281905265 seconds.
Retry in 1.2346070380207106 seconds.
Retry in 2.921240035787453 seconds.
Traceback (most recent call last):
File "
Thanks for trying aider, and sorry to hear you are having troubles.
At the top of your aider output you can see it catching and retrying openai.error.RateLimitError with exponential backoffs:
Retry in 0.7087802705903429 seconds.
Retry in 1.8306908281905265 seconds.
Retry in 1.2346070380207106 seconds.
Retry in 2.921240035787453 seconds.
Apparently 5 backoffs weren't quite enough, as the error says to try again in 6ms:
openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-x on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.
There's really no way for aider to "anticipate and pause" in advance of rate limit errors. But I bumped the retries to 10 in the latest version on GitHub.
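For anyone curious what that retry loop looks like conceptually, here is a minimal sketch of exponential backoff around the (pre-1.0) openai client. The function name, starting delay, and cap are my own assumptions for illustration, not aider's actual implementation:

import random
import time

import openai


def completion_with_backoff(max_retries=10, **kwargs):
    # Retry an OpenAI chat completion, sleeping longer after each
    # RateLimitError (exponential backoff with random jitter).
    delay = 0.5  # assumed starting delay in seconds
    for attempt in range(max_retries):
        try:
            return openai.ChatCompletion.create(**kwargs)
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise
            # double the wait each attempt, add jitter, cap at 60s
            sleep_time = min(delay * 2**attempt, 60) * random.uniform(0.5, 1.5)
            print(f"Retry in {sleep_time} seconds.")
            time.sleep(sleep_time)

The jitter is what produces the slightly irregular "Retry in ... seconds" delays shown in the log above.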
Are you or others using the OpenAI API for other things from the same org account? I would be very surprised if your solo manual use of aider could generate enough traffic to trigger a rate limit error like this. I have only ever encountered it when benchmarking aider with 15-20+ concurrent, fully automated threads running continuously for a long period of time.
Hi Paul!
I am the only user on my account. I started a new project and was asking Aider to create all the files and basic code, so that I can then upgrade it with more complete code. I was using 16384 tokens for the map, which maybe did not help. Aider also told me at the beginning that it was a big project and asked if I was sure I wanted to continue.
How do I know how many tokens I need for the map? Can I see how many Aider is currently using?
Thank you for the fix! Have a good evening!
I wouldn't recommend using 16k tokens for the map! Aider tries to adapt the map to whatever files you are working on. So only add the files that are relevant to your current changes to the chat, and aider will try to build a map of the parts of the repo that are most relevant to those files.
What model are you using?
gpt-4 has an 8k token limit, so I would recommend maybe 2k for the map at most. If you have gpt-4-32k you could go higher if you wanted to, but it probably doesn't provide a huge benefit beyond 3-4k in most cases.
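As a rough way to see how many tokens a file or prompt consumes for gpt-4, you can count them with the tiktoken library. This is a standalone sketch, not a feature of aider, and the filename is just an example:

import tiktoken


def count_tokens(text, model="gpt-4"):
    # Encode the text with the tokenizer used by the given model
    # and return the number of tokens.
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))


# Example: check a source file against gpt-4's 8k context window
with open("main.py") as f:
    print(count_tokens(f.read()), "tokens")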
I am using gpt-4; I thought gpt-4 had a 32k token limit. Thank you.
I'm going to close this issue for now, but feel free to re-open or file a new issue if you have any further problems.