
[FEATURE]: --throttle CLI option

Open piranna opened this issue 1 year ago • 10 comments

Feature summary

CLI option to limit the number of requests per minute

Feature description

Add a CLI option to set a limit on how many requests per minute will be made to OpenAI. This would both avoid rate-limit errors when making the requests and spread the requests over more time, putting less pressure on LinkedIn and reducing the risk of getting banned.
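A minimal sketch of what such a throttle could look like (the class and its usage are hypothetical illustrations, not the actual AIHawk code):

```python
import time


class Throttle:
    """Simple request throttle: enforces a minimum interval between calls."""

    def __init__(self, requests_per_minute: int):
        # e.g. 3 requests/minute -> at least 20 s between requests
        self.min_interval = 60.0 / requests_per_minute
        self.last_call = 0.0  # monotonic timestamp of the previous request

    def wait(self) -> None:
        """Sleep just long enough to stay under the configured rate."""
        elapsed = time.monotonic() - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()


# Hypothetical usage before each OpenAI call:
# throttle = Throttle(requests_per_minute=3)  # OpenAI free-tier limit
# throttle.wait()
# client.chat.completions.create(...)
```

The `--throttle` value from the CLI would just be passed as `requests_per_minute` here; calling `wait()` before every OpenAI request then spaces them out automatically.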

Motivation

I want to use the OpenAI free tier; I don't want or need a paid one.

Alternatives considered

Implement the feature myself and submit a PR.

Additional context

No response

piranna avatar Nov 01 '24 19:11 piranna

Does changing MINIMUM_WAIT_TIME work for your use case?

FrancescoVassalli avatar Nov 01 '24 22:11 FrancescoVassalli

How can I do that? I don't see it in the docs (are there any docs besides the README file?).

piranna avatar Nov 01 '24 23:11 piranna

OK, after looking at the code, if I understood correctly it's the time spent on the LinkedIn page doing scraping. That could help, yes; the only thing is, I don't know how many requests are being made to OpenAI for each page / job post, nor whether they are done in parallel. If a single OpenAI query is made per page and they are done sequentially, then we would not need this option, since the app itself would already be slower than that.

piranna avatar Nov 02 '24 00:11 piranna

Your understanding of the code is correct. OpenAI requests are only done sequentially. However, there are multiple requests per page.

FrancescoVassalli avatar Nov 03 '24 00:11 FrancescoVassalli

Then, since multiple requests to OpenAI are made per page: if the number of requests is 3 or fewer, the MINIMUM_WAIT_TIME of 1 minute is fine as long as pages are scraped sequentially too, since we would not exceed the free tier's limit of 3 requests per minute to the OpenAI API. But if there are more than 3 OpenAI requests per page, or pages are scraped in parallel, then we would exceed the OpenAI free-tier limit, and we would need the throttle.

piranna avatar Nov 03 '24 09:11 piranna
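The arithmetic above can be sketched as a quick back-of-envelope check (the function and the per-page numbers are hypothetical; the free-tier limit of 3 requests/minute is the one discussed in this thread):

```python
def needs_throttle(requests_per_page: int, minimum_wait_time_s: float,
                   free_tier_rpm: int = 3) -> bool:
    """Return True if sequential scraping alone would exceed the API rate limit.

    Assumes one page is scraped every minimum_wait_time_s seconds, with
    requests_per_page OpenAI calls made per page.
    """
    effective_rpm = requests_per_page * (60.0 / minimum_wait_time_s)
    return effective_rpm > free_tier_rpm


print(needs_throttle(3, 60))  # 3 requests/min: at the limit, no throttle needed
print(needs_throttle(4, 60))  # 4 requests/min: exceeds the free tier
```

With a 1-minute wait per page, the break-even point is exactly 3 requests per page; anything above that (or any parallel scraping) needs the throttle.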

Thanks for assigning me the issue; what should I do now?

piranna avatar Nov 07 '24 05:11 piranna

@piranna you can work on the release branch. Fork the repo and fetch the latest code from release (the branch for developers, with bug fixes and new features). After solving this problem, you can open a new Pull Request and @ me.

cjbbb avatar Nov 07 '24 06:11 cjbbb

I agree; the number of API requests is slightly too high.

cjbbb avatar Nov 07 '24 06:11 cjbbb

This issue has been marked as stale due to inactivity. Please comment or update if this is still relevant.

github-actions[bot] avatar Jan 10 '25 02:01 github-actions[bot]

not stale

piranna avatar Jan 12 '25 01:01 piranna