action-full-scan
GitHub - You have exceeded a secondary rate limit.
While running a scan, the workflow failed with the message:
2023-01-18T15:29:57.2895700Z Scanning process completed, starting to analyze the results!
2023-01-18T15:29:57.3263850Z [@octokit/rest] `const Octokit = require("@octokit/rest")` is deprecated. Use `const { Octokit } = require("@octokit/rest")` instead
2023-01-18T15:29:57.7910108Z ##[error]You have exceeded a secondary rate limit. Please wait a few minutes before you try again.
Is there anything that can be done to prevent this?
We should check that we follow these guidelines: https://docs.github.com/en/rest/guides/best-practices-for-integrators?apiVersion=2022-11-28#dealing-with-secondary-rate-limits
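For reference, those guidelines mostly come down to respecting the retry-after header and retrying with a backoff instead of failing immediately. A minimal sketch of what that could look like around the octokit client the analysis step uses (the @octokit/plugin-throttling package and the GITHUB_TOKEN environment variable here are assumptions for illustration, not how the action is currently wired up):

```js
// Sketch only: wrap the Octokit client with the throttling plugin so that
// secondary-rate-limit responses are retried after the wait GitHub asks for,
// instead of failing the analysis step outright.
const { Octokit } = require("@octokit/rest");
const { throttling } = require("@octokit/plugin-throttling");

const ThrottledOctokit = Octokit.plugin(throttling);

const octokit = new ThrottledOctokit({
  auth: process.env.GITHUB_TOKEN, // assumption: token passed via the environment
  throttle: {
    onRateLimit: (retryAfter, options, client, retryCount) => {
      client.log.warn(`Rate limit hit for ${options.method} ${options.url}`);
      return retryCount < 1; // retry once after `retryAfter` seconds
    },
    onSecondaryRateLimit: (retryAfter, options, client, retryCount) => {
      client.log.warn(
        `Secondary rate limit hit for ${options.method} ${options.url}`
      );
      return retryCount < 2; // retry a couple of times before giving up
    },
  },
});
```

Returning true (or a truthy value) from the handlers tells the plugin to wait the requested number of seconds and retry, which is exactly what the best-practices doc asks integrators to do.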
I got a similar result today...
Scanning process completed, starting to analyze the results!
Error: You have exceeded a secondary rate limit. Please wait a few minutes before you try again.
https://docs.github.com/free-pro-team@latest/rest/overview/rate-limits-for-the-rest-api#about-secondary-rate-limits
We have also started running into this issue. Unfortunately, re-running doesn't work as a workaround, as we keep hitting the same rate limit. Is there a recommended way to work around this limitation?
@DeviaVir I see that you linked a PR to this ticket (thanks for looking into it!). Could you please share its status, i.e. are you actively working on it, or are there any blockers?
You could try running separate scans at different times, each with a smaller set of active scan rules? That should reduce the number of requests made per hour.
@alecor191 You can use the forks I link in https://github.com/zaproxy/actions-common/pull/198; they contain the patches and don't run into any rate limits. I'll try to keep them in sync; hopefully the PR gets merged soon.
I will take a look.
This secondary rate limit causes our pipeline to fail multiple times per day. We'd greatly appreciate a review of the pull request mentioned above, if it would fix the issue. Thanks.
Same here; it fails for us with the same error.