
Retry if upload fails

alexandrnikitin opened this issue 2 years ago • 34 comments

Hi, from time to time we get 503 errors while uploading the data. The log looks like this:

...
[2023-02-24T17:38:21.359Z] ['verbose'] tag
[2023-02-24T17:38:21.359Z] ['verbose'] flags
[2023-02-24T17:38:21.359Z] ['verbose'] parent
[2023-02-24T17:38:21.360Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-2.1.0-uploader-0.3.5&token=*******....
[2023-02-24T17:38:21.360Z] ['verbose'] Passed token was 36 characters long
[2023-02-24T17:38:21.360Z] ['verbose'] https://codecov.io/upload/v4?package=github-action-2.1.0-uploader-0.3.5&...
        Content-Type: 'text/plain'
        Content-Encoding: 'gzip'
        X-Reduced-Redundancy: 'false'
[2023-02-24T17:38:23.332Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 503 - upstream connect error or disconnect/reset before headers. reset reason: connection failure
[2023-02-24T17:38:23.332Z] ['verbose'] The error stack is: Error: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 503 - upstream connect error or disconnect/reset before headers. reset reason: connection failure
    at main (/snapshot/repo/dist/src/index.js)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2023-02-24T17:38:23.332Z] ['verbose'] End of uploader: 3001 milliseconds

It would be great to have a retry mechanism with some defined timeout.

alexandrnikitin avatar Feb 24 '23 18:02 alexandrnikitin

Hi there. I strongly agree with @alexandrnikitin. It wastes a lot of time because I have to retry the whole job whenever the codecov action fails.

[2023-03-09T18:01:33.255Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
[2023-03-09T18:01:33.256Z] ['verbose'] The error stack is: Error: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}
    at main (/snapshot/repo/dist/src/index.js)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
[2023-03-09T18:01:33.256Z] ['verbose'] End of uploader: 1829 milliseconds

LucasXu0 avatar Mar 10 '23 02:03 LucasXu0

This would be very helpful.

We fixed the initial problem ("Unable to locate build via Github Actions API") using some of the suggestions from several different discussions.

It had been running OK for a few weeks, but now we have started to see different errors, such as:

[2023-03-13T18:04:08.821Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-3.1.1-uploader-0.3.5&token=*******&branch=fix%2F10518&build=4407915657&build_url=https%3A%2F%2Fgithub.com%2Fdecidim%2Fdecidim%2Factions%2Fruns%2F4407915657&commit=538d19c980fa26abebbdb736c28488a81c69ac8a&job=%5BCI%5D+Meetings+%28unit+tests%29&pr=10519&service=github-actions&slug=decidim%2Fdecidim&name=decidim-meetings&tag=&flags=decidim-meetings&parent=
[2023-03-13T18:04:27.515Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 500 - {"error": "Server Error (500)"}

And

[2023-03-13T18:16:44.977Z] ['info'] Pinging Codecov: https://codecov.io/upload/v4?package=github-action-3.1.1-uploader-0.3.5&token=*******&branch=fix%2F10518&build=4407915631&build_url=https%3A%2F%2Fgithub.com%2Fdecidim%2Fdecidim%2Factions%2Fruns%2F4407915631&commit=538d19c980fa26abebbdb736c28488a81c69ac8a&job=%5BCI%5D+Meetings+%28system+public%29&pr=10519&service=github-actions&slug=decidim%2Fdecidim&name=decidim-meetings-system-public&tag=&flags=decidim-meetings-system-public&parent=
[2023-03-13T18:17:15.139Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: HeadersTimeoutError: Headers Timeout Error

It would be really helpful if the codecov action waited a few seconds and retried, so that we don't have to rerun the whole job, which can take up to 30 minutes (depending on the workflow).

ahukkanen avatar Mar 13 '23 19:03 ahukkanen

+1

pavolloffay avatar Mar 16 '23 13:03 pavolloffay

Yes please, we hit this issue fairly constantly and it's incredibly annoying.

Licenser avatar Mar 30 '23 12:03 Licenser

To put this in perspective, all of those PRs failed because the codecov upload failed. This means we have to re-run the jobs and pay for minutes again :( This issue is really painful.

(screenshot of the failed PR checks)

Licenser avatar Mar 31 '23 07:03 Licenser

We have been experiencing a lot of issues similar to those described above. The number of jobs that fail is really getting annoying, to the point that reviewers aren't even bothering to restart the CI.

We've limited runtime for the codecov jobs to prevent them from running for hours and exhausting our CI runners. On non-open source projects, this can be quite costly when GitHub bills the org.

Anything we can provide to resolve this issue?

../Frenck

frenck avatar Mar 31 '23 13:03 frenck

Just had a similar issue, this time with error code 502. https://github.com/home-assistant/core/actions/runs/4618964416/jobs/8167147703

[2023-04-05T13:27:00.542Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 502 - 
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>

epenet avatar Apr 05 '23 13:04 epenet

We're also seeing the above-mentioned 502s.

What we ended up doing is using a retry mechanism like https://github.com/Wandalen/wretry.action to retry the upload. Setting fail_ci_if_error to false is not really an option if you care about the coverage reports.
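For reference, here is a minimal sketch of what that wrapping looks like. The input names are as I recall them from the wretry.action README (attempt_delay is in milliseconds), and coverage.xml is just a placeholder path, so double-check against that action's docs:

- name: Upload coverage to Codecov (retried)
  uses: Wandalen/wretry.action@v1
  with:
    action: codecov/codecov-action@v3
    # Inputs forwarded to the wrapped action:
    with: |
      token: ${{ secrets.CODECOV_TOKEN }}
      files: coverage.xml
      fail_ci_if_error: true
    attempt_limit: 3
    attempt_delay: 30000  # 30 seconds between attempts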

pmalek avatar Apr 06 '23 08:04 pmalek

Facing the same issue. Would love a retry ❤️

Stael avatar Apr 06 '23 16:04 Stael

A retry would be awesome.

Here is another failure on GitHub Actions - https://github.com/twisted/twisted/actions/runs/4780033926/jobs/8497499137?pr=11845#step:13:42

[2023-04-23T19:28:07.607Z] ['error'] There was an error running the uploader:
Error uploading to https://codecov.io: Error: getaddrinfo EAI_AGAIN codecov.io

adiroiban avatar Apr 23 '23 19:04 adiroiban

This will be such a game changer for the CI experience.

If CI periodically fails for reasons like network errors, people tend to ignore other failures too.

citizen-stig avatar Jun 08 '23 08:06 citizen-stig

We've had the following (same problem as @LucasXu0 reported above) which prevented the upload from working.

[2023-06-05T13:59:02.657Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}

This looks to have been caused by a temporary GitHub API outage, but because we don't have fail_ci_if_error enabled, the coverage on our main branch became incorrect, as only a portion of the required coverage data was uploaded.

I would suggest a new optional argument for codecov-action allowing a given number of retries and an inter-retry cooldown to be specified.
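Something along these lines, purely as an illustration (both retry inputs below are hypothetical and do not exist in the action today):

- name: Upload coverage report
  uses: codecov/codecov-action@v3
  with:
    fail_ci_if_error: true
    # Hypothetical inputs, not currently supported by codecov-action:
    upload_retries: 3    # number of retries after a failed upload
    retry_cooldown: 30   # seconds to wait between retries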

As a workaround, instead of performing the coverage upload as part of the same job as the build & test, the upload can be split out into a separate job. The upload-artifact action could be used to store the raw coverage data as an artifact, which a later codecov job would retrieve and upload. If the codecov upload fails, only the failed codecov job needs to be rerun. That job would be just the upload, so it would avoid rerunning any build/test, saving many GitHub runner minutes.

GCHQDeveloper314 avatar Jun 08 '23 13:06 GCHQDeveloper314

We have exactly the same behavior in https://github.com/equinor/ert and an option to retry on connection failures would be awesome.

eivindjahren avatar Jun 15 '23 13:06 eivindjahren

What we ended up doing is using a retry mechanism like https://github.com/Wandalen/wretry.action to retry the upload. Setting fail_ci_if_error to false is not really an option if you care about the coverage reports.

Using this retry action in my project significantly reduces the failure count. It serves as a workaround for the time being.

LucasXu0 avatar Jun 16 '23 05:06 LucasXu0

@GCHQDeveloper314 Would you mind sharing the workflow with your solution? I've been able to use https://github.com/actions/upload-artifact in one job that generates a coverage .xml file and https://github.com/actions/download-artifact in another job to retrieve the file and upload it to Codecov, but Codecov says it's an "unusable report".

imnasnainaec avatar Sep 15 '23 19:09 imnasnainaec

I figured it out. I needed to check out the repository before uploading the coverage report:

jobs:
  test_coverage:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v3
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run test-frontend:coverage
        env:
          CI: true
      - name: Upload coverage artifact
        uses: actions/upload-artifact@v3
        with:
          if-no-files-found: error
          name: coverage
          path: coverage/clover.xml
          retention-days: 7

  upload_coverage:
    needs: test_coverage
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Download coverage artifact
        uses: actions/download-artifact@v3
        with:
          name: coverage
      - name: Upload coverage report
        uses: codecov/codecov-action@v3
        with:
          fail_ci_if_error: true
          files: clover.xml
          flags: frontend
          name: Frontend

imnasnainaec avatar Sep 15 '23 21:09 imnasnainaec

@imnasnainaec At the time of suggesting that workaround here I hadn't implemented it yet. When I did, I also found that a repo checkout is required by codecov; I believe this is because it needs the git history. My solution can be seen here - it is very similar to what you've posted above. With this approach, if the codecov upload fails, only a single step (which takes under 1 minute) needs to be rerun to retry the upload - no need to rerun any tests, which saves us many minutes.

GCHQDeveloper314 avatar Sep 18 '23 07:09 GCHQDeveloper314

This issue occurs because the GitHub API takes a little while to update with new events. I have an action that relies on the GitHub events API, where I experienced a similar situation. Adding a simple 10-second wait with a retry loop solved it; in most cases it works on the first attempt. Just for reference: https://github.com/LizardByte/setup-release-action/pull/40

Please add retry logic to this action.

[2023-12-11T15:16:32.395Z] ['error'] There was an error running the uploader: Error uploading to https://codecov.io: Error: There was an error fetching the storage URL during POST: 404 - {'detail': ErrorDetail(string='Unable to locate build via Github Actions API. Please upload with the Codecov repository upload token to resolve issue.', code='not_found')}

Tokens/secrets are not an option for open source projects that accept PRs from forks.

Re-running the action also does not solve it. I have one workflow that I have run 7 times, and it has failed to upload every time... likely because my tests complete very quickly, and there's almost no possibility of the run being in the GitHub Actions API at that point.

ReenigneArcher avatar Dec 11 '23 15:12 ReenigneArcher

It seems that the need for an automatic retry with exponential backoff is more urgent these days. I have seen:

The server encountered a temporary error and could not complete your request. Please try again in 30 seconds.

This is as clear as it sounds, and implementing a retry in Python is not really hard.

ssbarnea avatar Feb 26 '24 17:02 ssbarnea

+1 - it wastes a lot of time when the upload fails and the entire action must be re-run (which usually involves running all the unit tests).

tomage avatar Apr 16 '24 20:04 tomage

I have attempted to add a 30 second sleep and retry and it simply isn't enough. If a retry is to be added, it needs to be more than that to work consistently.
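For anyone who wants to experiment, here is a rough sketch of a longer, growing backoff using the standalone uploader binary instead of the action. It assumes the documented uploader flags (-Z, -t, -f); coverage.xml is a placeholder path, so adapt it to your setup:

- name: Upload coverage with growing backoff
  env:
    CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
  run: |
    curl -Os https://uploader.codecov.io/latest/linux/codecov
    chmod +x codecov
    # Retry with a growing delay: 30s, 60s, 120s, 240s between attempts.
    delay=30
    for attempt in 1 2 3 4 5; do
      if ./codecov -Z -t "$CODECOV_TOKEN" -f coverage.xml; then
        exit 0
      fi
      if [ "$attempt" -lt 5 ]; then
        echo "Upload attempt $attempt failed, sleeping ${delay}s before retrying"
        sleep "$delay"
        delay=$((delay * 2))
      fi
    done
    echo "All upload attempts failed"
    exit 1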

eivindjahren avatar Apr 17 '24 05:04 eivindjahren

I have attempted to add a 30 second sleep and retry and it simply isn't enough. If a retry is to be added, it needs to be more than that to work consistently.

In v4 you get a more detailed error message. But basically, tokenless uploads are failing more often due to GitHub API rate limits.

error - 2024-04-16 13:51:14,366 -- Commit creating failed: {"detail":"Tokenless has reached GitHub rate limit. Please upload using a token: https://docs.codecov.com/docs/adding-the-codecov-token. Expected available in 459 seconds."}

It seems like the action uses a central, Codecov-owned GitHub API token. That is likely because the built-in GITHUB_TOKEN doesn't have access to the events scope (https://docs.github.com/en/actions/security-guides/automatic-token-authentication#permissions-for-the-github_token), and using a GitHub App (https://docs.github.com/en/actions/security-guides/automatic-token-authentication#granting-additional-permissions) still wouldn't work for fork PRs, as far as I understand.

In any event, the information required for a reliable retry is already available in the logs. In my example, waiting ~8 minutes is better than rebuilding the project from zero, which can sometimes take ~30 minutes. Even just avoiding the manual "re-run" click would be worth it.
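Purely as a sketch of that idea, assuming you run the uploader yourself in a run step: capture the output, pull the advertised wait time out of it, and retry once after sleeping that long. The ./codecov invocation below is a stand-in for whatever upload command your workflow actually uses (the rate-limit message comes from the newer CLI), and coverage.xml is a placeholder path.

- name: Upload coverage, waiting out the rate limit if needed
  env:
    CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
  run: |
    set -o pipefail
    # Stand-in upload command; substitute your own uploader invocation here.
    upload() { ./codecov -Z -t "$CODECOV_TOKEN" -f coverage.xml; }
    if upload 2>&1 | tee upload.log; then
      exit 0
    fi
    # Pull the advertised wait time out of a message like
    # "Expected available in 459 seconds."; fall back to 60s if absent.
    wait_s=$(grep -oE 'available in [0-9]+ seconds' upload.log | grep -oE '[0-9]+' || echo 60)
    echo "First attempt failed, sleeping $((wait_s + 10))s before retrying"
    sleep $((wait_s + 10))
    upload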

ReenigneArcher avatar Apr 17 '24 13:04 ReenigneArcher

But basically tokenless uploads are failing more often due to GitHub api limits.

Given that the latest version requires a token, this is not the issue that most people are reporting here, and possibly not worth the extra work to extract the retry time from the message. The primary issue is with Codecov's servers themselves, which occasionally fail to accept an upload. As shown above (https://github.com/codecov/codecov-action/issues/926#issuecomment-1964679977) this usually suggests retrying after around 30 seconds. This issue is just asking Codecov to follow the advice from their own server.

Dreamsorcerer avatar Apr 17 '24 18:04 Dreamsorcerer

So I believe this behavior recently changed a bit. You now get the following if you use forks:

info - 2024-05-27 07:27:20,004 -- ci service found: github-actions
info - 2024-05-27 07:27:20,294 -- The PR is happening in a forked repo. Using tokenless upload.
info - 2024-05-27 07:27:20,478 -- Process Commit creating complete
error - 2024-05-27 07:27:20,479 -- Commit creating failed: {"detail":"Tokenless has reached GitHub rate limit. Please upload using a token: https://docs.codecov.com/docs/adding-the-codecov-token. Expected available in 393 seconds."}

Note that the link is broken and should really point to this: https://docs.codecov.com/docs/codecov-uploader#supporting-token-less-uploads-for-forks-of-open-source-repos-using-codecov . It turns out that codecov has a shared pool of GitHub resources that gets rate limited. So if you have retry logic implemented, please be considerate about using those shared resources. Also, if codecov could give some guidance on how to avoid tokenless uploads in forked-repo workflows, that would be great.


eivindjahren avatar May 27 '24 10:05 eivindjahren

Since they provide the expected time now (Expected available in 393 seconds.), maybe they could handle the retry logic using the time provided in their failure response.

Maybe they could allow us to use a public upload token for PRs, which would only have permission to add coverage information for repos/branches other than the origin one.

ReenigneArcher avatar May 27 '24 13:05 ReenigneArcher

@ReenigneArcher thanks for your message. Initially, we tried to do retries after the expected time. However, since this is a blocking call, CI runs could potentially run for hours if they missed the window to upload.

That said, we are making changes to our system to decrease the number of GitHub API calls which will hopefully alleviate some of this pain.

Also, I am looking into adding retries as a feature to the Action. However, this may be slated for later next quarter.

thomasrockhu-codecov avatar Jun 18 '24 16:06 thomasrockhu-codecov

However, since this is a blocking call, CI runs could potentially run for hours if they missed the window to upload.

That's a fair concern, but in most cases (all that I've seen?), the retry can happen after 30 seconds or so, while restarting the CI process (for many of us) takes more like 15+ minutes and requires manually rerunning it (versus it happening automatically without supervision).

The retry logic could be opt-in for those concerned that it might use too many minutes (though it should obviously also be capped at a sensible or configurable time limit).

Dreamsorcerer avatar Jun 18 '24 18:06 Dreamsorcerer

@Dreamsorcerer Yeah, come to think of it, something extremely wasteful seems to be happening. I just realized that triggering an upload of coverage data shouldn't consume anything from the GitHub API! Is it fetching the commit every time a coverage upload happens? We upload 4 reports for each PR, so 3 of those uploads should not need to interact with GitHub.

It seems like you could have the GitHub action upload whatever information you need to track and then fetch what you need from GitHub when it is requested, since interacting with coverage data happens far less frequently than coverage report uploads.

eivindjahren avatar Jun 19 '24 06:06 eivindjahren

Also, if codecov is trying to use the events API, commits may not even appear there for up to 6 hours. I discovered that in another project of mine where I was using the events API.


https://docs.github.com/en/rest/activity/events?apiVersion=2022-11-28#list-repository-events

ReenigneArcher avatar Jun 19 '24 19:06 ReenigneArcher