Azure CI jobs sometimes fail with "fatal: couldn't find remote ref"
In the last few weeks, I have noticed a few cases in which some PR jobs start failing with errors like:
```
==============================================================================
Syncing repository: conda-forge/idyntree-feedstock (GitHub)
git version
git version 2.47.0
git lfs version
git-lfs/3.5.1 (GitHub; linux amd64; go 1.21.8)
git init "/home/vsts/work/1/s"
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /home/vsts/work/1/s/.git/
git remote add origin https://github.com/conda-forge/idyntree-feedstock
git config gc.auto 0
git config core.longpaths true
git config --get-all http.https://github.com/conda-forge/idyntree-feedstock.extraheader
git config --get-all http.extraheader
git config --get-regexp .*extraheader
git config --get-all http.proxy
git config http.version HTTP/1.1
git --config-env=http.extraheader=env_var_http.extraheader fetch --force --tags --prune --prune-tags --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/pull/112/merge:refs/remotes/pull/112/merge
fatal: couldn't find remote ref refs/pull/112/merge
##[warning]Git fetch failed with exit code 128, back off 2.266 seconds before retry.
git --config-env=http.extraheader=env_var_http.extraheader fetch --force --tags --prune --prune-tags --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/pull/112/merge:refs/remotes/pull/112/merge
fatal: couldn't find remote ref refs/pull/112/merge
##[warning]Git fetch failed with exit code 128, back off 8.175 seconds before retry.
git --config-env=http.extraheader=env_var_http.extraheader fetch --force --tags --prune --prune-tags --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/pull/112/merge:refs/remotes/pull/112/merge
fatal: couldn't find remote ref refs/pull/112/merge
##[error]Git fetch failed with exit code: 128
Finishing: Checkout conda-forge/idyntree-feedstock@refs/pull/112/merge to s
```
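For anyone hitting this, a minimal sketch of a manual check (using the repository and PR number from the log above) of whether GitHub is currently advertising the merge ref that the Azure checkout step tries to fetch; if it is not, the `git fetch` above is bound to fail:

```bash
# Ask GitHub which refs it advertises for this PR; --exit-code makes
# git ls-remote exit with status 2 when no matching ref is found.
git ls-remote --exit-code \
    https://github.com/conda-forge/idyntree-feedstock \
    "refs/pull/112/merge" \
  && echo "merge ref is advertised, the Azure fetch should work" \
  || echo "merge ref is missing, which matches the failure above"
```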
See https://github.com/conda-forge/idyntree-feedstock/pull/112 . In most cases, restarting the CI was not effective at solving the problem, while having the bot open a new PR was. I am not sure what is causing this or if this is the right place to report the issue, but I prefer to have at least an open issue on this so that I have something to cross-link whenever this happens.
Also, pushing new commits seems effective at fixing the problem.
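For reference, a rough sketch of that workaround from a local checkout of the PR branch (`<pr-branch>` is a placeholder for whatever branch the PR was opened from); an empty commit is enough, so no files need to change:

```bash
# Push a new (empty) commit to the PR branch, so that CI is retriggered
# and (apparently) GitHub refreshes the PR refs.
git checkout <pr-branch>
git commit --allow-empty -m "Empty commit to retrigger CI"
git push origin <pr-branch>
```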
See also:
- https://matrix.to/#/!SOyumkgPRWoXfQYIFH:matrix.org/$17295915824STUby:gitter.im?via=matrix.org&via=gitter.im&via=ifisc.uib-csic.es
- https://github.com/conda-forge/qdax-feedstock/pull/8
- https://github.com/conda-forge/torchaudio-feedstock/pull/4
fyi @tobias-fischer
Please see https://github.com/conda-forge/status/issues/188. I am guessing this is a duplicate of that one.
> Please see conda-forge/status#188. I am guessing this is a duplicate of that one.
I am not sure. This problem predates that one (as you can see in https://github.com/conda-forge/qdax-feedstock/pull/8), and the high-level error is also different: in that case the Azure CI jobs did not start at all, while in this one the job starts and then fails after a few seconds. That said, they may well be connected. If this does not occur anymore in a few days, I think we can close this.
Can you try an empty commit? Close-reopen has been insufficient on all my tries, but I saw a staged-recipes PR clear up after a merge from main. I'm still waiting to see if my bot-reruns resolve it for some bot version bump PRs.
@danielnachun and I were also seeing this since at least Friday.
> Can you try an empty commit? Close-reopen has been insufficient on all my tries, but I saw a staged-recipes PR clear up after a merge from main. I'm still waiting to see if my bot-reruns resolve it for some bot version bump PRs.
I tried a non-empty commit, and it fixed the problem.
I’ve tried non-empty commits (a rerender, for example) and that didn’t resolve the issue in my case.
> I’ve tried non-empty commits (a rerender, for example) and that didn’t resolve the issue in my case.
I vaguely recalled that, but I had no reference so I was not sure. In my case, the non-empty commit was done by me today (not a bot) and it worked. Not sure if the difference was the committer or simply that the problem today is different from a few days ago.
Another instance: https://github.com/conda-forge/smirnoff-plugins-feedstock/pull/11 .
Yeah, I was running into this earlier today, but an empty commit seemed to do the trick when re-opening the PR didn't. I didn't look into it any more closely than that.
Getting this again today. No clue why.
An empty commit seems to work again. Still confusing and a little frustrating.
Are we still seeing this issue? If so, can you please share links to any recent examples?
Thanks in advance! 🙏
> Are we still seeing this issue? If so, can you please share links to any recent examples?
> Thanks in advance! 🙏
I have not seen one in a long time, but today it happened again: https://github.com/conda-forge/libode-feedstock/pull/27 .
After pushing a new commit, the CI now fails with:
```
+ docker pull quay.io/condaforge/linux-anvil-x86_64:alma9
Error response from daemon: received unexpected HTTP status: 502 Bad Gateway
```
which I guess is related to the quay.io outage (see https://status.redhat.com/)?
yeah probably
We can probably close this; we can reopen it if it happens again outside of a worldwide outage event such as https://github.com/conda-forge/status/issues/201 .