Fetching versions list fails sometimes
Hi,
I am using this action in a repo where 8 jobs start simultaneously, and in some cases it throws a "fetch failed" error. This error doesn't occur in v2.
Not sure where this could be coming from. At first I thought it could be the download APIs choking on a rate limit, but fetch wouldn't throw a TypeError in that case; a TypeError usually indicates a network error instead.
I also considered that fetch itself could be the issue -- I came across a Node.js issue about it (for reference: https://github.com/nodejs/node/issues/46167), introduced in Node 18 (v3 of this action also upgraded Node from 16 to 20), but I'm not sure the same conditions apply.
You mentioned the TypeError only happens some of the time. Is there anything that triggers it more frequently? Does it happen more consistently on a specific platform, for example?
Could anyone provide a small repro so I can inspect the full logs and debug it? I'm not sure how to reproduce it consistently.
@federicocarboni: I've noticed this as well since upgrading to v3; for example: https://github.com/OpenArchitex/Caerus/actions/runs/8115243282/job/22182692666. If you need any help from my end to debug, let me know. 😺
I have a scheduled job that runs once an hour, and since v3 it has failed about once a day, i.e. roughly one failure every 24 runs.
Has anyone ever had it happen on Windows runners?
Adding a +1 and reporting that this is still occurring with v3 on the ubuntu-latest GitHub runners.
It appears that the TypeError: fetch failed is thrown on network errors. Perhaps the error cause could be included in the log to get more details: https://github.com/federicocarboni/setup-ffmpeg/blob/main/dist/index.js#L20888
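For illustration, a minimal sketch of what surfacing the cause might look like (the helper name is hypothetical, not the action's actual code):

```ts
// Sketch only: Node's fetch (undici) wraps network failures in a
// TypeError whose `cause` property carries the underlying error
// (e.g. ECONNRESET, ETIMEDOUT, EAI_AGAIN).
async function fetchLoggingCause(url: string): Promise<Response> {
  try {
    return await fetch(url);
  } catch (err) {
    if (err instanceof TypeError) {
      // Log the root cause before rethrowing, so CI logs show more
      // than just "TypeError: fetch failed".
      console.error(`fetch failed for ${url}; cause:`, err.cause);
    }
    throw err;
  }
}
```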
For anyone annoyed by this issue -- while I try to understand why a GitHub runner would have network errors -- pinning ffmpeg to a specific version should mitigate it; set ffmpeg-version to 6.1.0, for example.
We get "Error: AssertionError [ERR_ASSERTION]: Requested version null is not available" on macOS if we pin 6.1.0.
Sorry, my bad -- try with 6.1, as unfortunately the upstream sources don't really follow semver.
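For clarity, the pinned setup in a workflow would look something like this:

```yaml
- uses: FedericoCarboni/setup-ffmpeg@v3
  with:
    # Pin the version; note upstream tags are not strict semver.
    ffmpeg-version: '6.1'
```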
It still fails with an explicit 6.1. I guess it is some issue with the server, or with the communication between GitHub Actions and the server.
+1, see here.
Can you add a retry mechanism that retries the download, maybe with exponential backoff? See the sketch below.
Or allow specifying a local binary to use instead of downloading, and just do the rest of the setup?
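A minimal sketch of what such a retry wrapper might look like inside the action (the helper and its use of plain fetch are assumptions, not the action's actual download code):

```ts
// Hypothetical retry wrapper; retries transient network failures
// with exponential backoff before giving up.
async function fetchWithRetry(url: string, maxAttempts = 3): Promise<Response> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await fetch(url);
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // give up after the last attempt
      const delayMs = 1000 * 2 ** (attempt - 1); // 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```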
I ended up creating a composite GitHub action:
```yaml
name: 'Setup FFmpeg with retries'
description: 'Installs FFmpeg with retry logic'
inputs:
  github-token:
    description: 'GitHub Token (required by "FedericoCarboni/setup-ffmpeg@v3")'
    required: true
runs:
  using: 'composite'
  steps:
    - name: Setup FFmpeg
      id: attempt1
      continue-on-error: true
      uses: FedericoCarboni/setup-ffmpeg@v3
      with:
        github-token: ${{ inputs.github-token }}
    - name: Setup FFmpeg (retry 2)
      if: ${{ steps.attempt1.outcome == 'failure' }}
      id: attempt2
      continue-on-error: true
      uses: FedericoCarboni/setup-ffmpeg@v3
      with:
        github-token: ${{ inputs.github-token }}
    # Last attempt has no continue-on-error, so the composite action
    # fails if all three attempts fail.
    - name: Setup FFmpeg (retry 3)
      if: ${{ steps.attempt2.outcome == 'failure' }}
      id: attempt3
      uses: FedericoCarboni/setup-ffmpeg@v3
      with:
        github-token: ${{ inputs.github-token }}
```
This file is saved in my repo under actions/setup-ffmpeg/action.yaml.
Then I changed my workflow to call
```yaml
- uses: actions/checkout@v2
- name: Setup FFmpeg (with retries)
  uses: ./actions/setup-ffmpeg
  with:
    github-token: ${{ secrets.GITHUB_TOKEN }}
```
instead of the original action. Hopefully, that will do the trick.
Reporting back: the composite GitHub action hack did the trick and let me avoid the failures. I hope it helps someone else overcome this issue.
I am experiencing the same fetch error: https://github.com/Fak3/enjam/actions/runs/10909063062/job/30276429118
Perhaps the retry trick from the comments above could be integrated right into this action, so that users don't have to create their own?