[FEATURE]: --throttle CLI option
Feature summary
CLI option to limit the number of requests per minute
Feature description
Add a CLI option to set a limit on how many requests per minute will be made to OpenAI. This would both avoid rate-limit errors when making the requests and spread the requests over more time, putting less pressure on LinkedIn and reducing the risk of getting banned.
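As a rough illustration of what I have in mind (the flag name, default, and parser setup are hypothetical; the project may parse arguments differently), a minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag and default; 0 would mean "no throttling", keeping current behavior.
parser.add_argument(
    "--throttle",
    type=int,
    default=0,
    metavar="RPM",
    help="Maximum number of OpenAI requests per minute (0 = unlimited)",
)
args = parser.parse_args()
```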
Motivation
I want to use the OpenAI free tier; I don't want or need a paid one.
Alternatives considered
Implement the feature myself and do a PR.
Additional context
No response
Does changing MINIMUM_WAIT_TIME work for your use case?
How can I do that? I don't see it in the docs (is there any documentation besides the README file?).
Ok, after looking at the code, if I understood correctly it's the time spent on the LinkedIn page doing the scraping. That could help, yes; it's just that I don't know how many requests are being made to OpenAI for each page / job post, nor whether they are done in parallel. If a single OpenAI query is made per page and pages are processed sequentially, then we would not need this option, since the app itself would already be slower than that.
Your understanding of the code is correct. OpenAI requests are only done sequentially. However, there are multiple requests per page.
Then, since multiple requests to OpenAI are made per page, if the number of requests is 3 or fewer, the MINIMUM_WAIT_TIME of 1 minute is fine as long as pages are also scraped sequentially, since we would not exceed the OpenAI API free tier limit of 3 requests per minute. But if there are more than 3 OpenAI requests per page, or pages are scraped in parallel, then we would exceed the OpenAI free tier limit, and we would need the throttle.
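For reference, a minimal sketch of the kind of throttling I mean, assuming the OpenAI calls can be funneled through a single helper (the class name and the call site are hypothetical, not the project's actual structure):

```python
import time

class Throttle:
    """Space out calls so they stay under a requests-per-minute cap."""

    def __init__(self, requests_per_minute: int):
        self.min_interval = 60.0 / requests_per_minute  # seconds between calls
        self.last_call = 0.0

    def wait(self) -> None:
        # Sleep just long enough to keep the configured spacing between calls.
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

# Hypothetical usage: 3 requests per minute matches the free tier limit
# discussed above. throttle.wait() would run before each OpenAI request.
throttle = Throttle(requests_per_minute=3)
```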
Thanks for assigning me the issue. What should I do now?
@piranna you can work on the release branch. Fork the repo and fetch the latest code from release (for developers: it has the fixed bugs and new features). After solving this problem, you can make a new Pull Request and @ me.
I agree with that; the number of API requests is slightly too high.
This issue has been marked as stale due to inactivity. Please comment or update if this is still relevant.
not stale