Error: The process '/usr/bin/git' failed with exit code 128

Open arhue opened this issue 4 years ago • 105 comments

A couple of days back, this action stopped working.

    - name: Checkout private tools
      uses: actions/checkout@v2
      with:
        repository: tectonic/infrastructure-helm
        token: ${{ secrets.GIT_TECHDEPLOY_TOKEN }}
        path: infrastructure-helm
        fetch-depth: 0
        ref: master
Run actions/checkout@v2
/usr/bin/docker exec  d0faea3798ca7561c881e147e6613d25f75372e481a2c181696cc87de585d470 sh -c "cat /etc/*release | grep ^ID"
Syncing repository: tectonic/infrastructure-helm
Getting Git version info
Initializing the repository
Disabling automatic garbage collection
Setting up auth
Fetching the repository
  /usr/bin/git -c protocol.version=2 fetch --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
  remote: Repository not found.
  Error: fatal: repository 'https://github.com/tectonic/infrastructure-helm/' not found
  The process '/usr/bin/git' failed with exit code 128
  Waiting 19 seconds before trying again
  /usr/bin/git -c protocol.version=2 fetch --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
  remote: Repository not found.
  Error: fatal: repository 'https://github.com/tectonic/infrastructure-helm/' not found
  The process '/usr/bin/git' failed with exit code 128
  Waiting 11 seconds before trying again
  /usr/bin/git -c protocol.version=2 fetch --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/*
  remote: Repository not found.
  Error: fatal: repository 'https://github.com/tectonic/infrastructure-helm/' not found
  Error: The process '/usr/bin/git' failed with exit code 128

Seems similar to this issue: https://github.com/ad-m/github-push-action/issues/76

arhue avatar Jan 06 '21 13:01 arhue

We also experienced a similar issue yesterday in one of our private repositories:

  /usr/bin/git -c protocol.version=2 fetch --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/* +COMMIT_HASH:refs/remotes/pull/PULL_REQUEST_NUMBER/merge
  remote: Repository not found.
  Error: fatal: repository 'https://github.com/sifive/REPOSITORY_NAME/' not found
  The process '/usr/bin/git' failed with exit code 128
  Waiting 12 seconds before trying again
  /usr/bin/git -c protocol.version=2 fetch --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/* +COMMIT_HASH:refs/remotes/pull/PULL_REQUEST_NUMBER/merge
  remote: Repository not found.
  Error: fatal: repository 'https://github.com/sifive/REPOSITORY_NAME/' not found
  The process '/usr/bin/git' failed with exit code 128
  Waiting 16 seconds before trying again
  /usr/bin/git -c protocol.version=2 fetch --prune --progress --no-recurse-submodules origin +refs/heads/*:refs/remotes/origin/* +refs/tags/*:refs/tags/* +COMMIT_HASH:refs/remotes/pull/PULL_REQUEST_NUMBER/merge
  remote: Repository not found.
  Error: fatal: repository 'https://github.com/sifive/REPOSITORY_NAME/' not found
  Error: The process '/usr/bin/git' failed with exit code 128

This happened at around Jan 5, 2021, 7:05 PM PST. The issue seems to have gone away now, but I just wanted to add some extra information in case it's useful.

richardxia avatar Jan 06 '21 19:01 richardxia

Doesn't appear to be fixed for me ☹️

arhue avatar Jan 07 '21 03:01 arhue

Without ref: master, this is the output:

Run actions/checkout@v2
  with:
    repository: tectonic/infrastructure-helm
    token: ***
    path: infrastructure-helm
    fetch-depth: 0
    ssh-strict: true
    persist-credentials: true
    clean: true
    lfs: false
    submodules: false
  env:
    XDG_DATA_HOME: /root/.local/share
    KUBECONFIG: /root/kubeconfig
    AWS_DEFAULT_REGION: eu-central-1
    AWS_REGION: eu-central-1
    AWS_ACCESS_KEY_ID: ***
    AWS_SECRET_ACCESS_KEY: ***
/usr/bin/docker exec  391d5591308f1b002e6fa53e803efc54eab81186507cdf92652262f21b79d9ef sh -c "cat /etc/*release | grep ^ID"
Syncing repository: tectonic/infrastructure-helm
Getting Git version info
Initializing the repository
Disabling automatic garbage collection
Setting up auth
Determining the default branch
  Retrieving the default branch name
  Not Found
  Waiting 13 seconds before trying again
  Retrieving the default branch name
  Not Found
  Waiting 17 seconds before trying again
  Retrieving the default branch name
  Error: Not Found

arhue avatar Jan 07 '21 21:01 arhue

I'm having the same issue

CFlaniganMide avatar Jan 27 '21 22:01 CFlaniganMide

same issue here

samueltadros avatar Jan 31 '21 08:01 samueltadros

Did anyone reach a solution?

samueltadros avatar Jan 31 '21 08:01 samueltadros

Same here. Thanks!

jjzazuet avatar Feb 06 '21 03:02 jjzazuet

Seems to be an issue with the token handed out to the CI runner. When I used one generated manually and passed it via with.token, cloning worked fine.
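
A minimal sketch of that workaround, assuming the manually generated PAT is stored in a repo secret (the secret name GH_PAT is just a placeholder):

      # Sketch: use a manually generated PAT instead of the run's default token.
      # GH_PAT is a placeholder secret name.
      - uses: actions/checkout@v2
        with:
          token: ${{ secrets.GH_PAT }}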

barthalion avatar Feb 08 '21 13:02 barthalion

We were facing the same issue in private repos. We pinned to the previous release and that seems to work.

We changed it to:

   - uses: actions/checkout@a81bbbf8298c0fa03ea29cdc473d45769f953675

(which is release v2.3.3: https://github.com/actions/checkout/releases/tag/v2.3.3)

What is the root cause, though?

sgore-godaddy avatar Feb 08 '21 17:02 sgore-godaddy

Same problem here too. Just started getting this now.

progzilla avatar Feb 26 '21 13:02 progzilla

How do I solve this problem?

jayconscious avatar Feb 28 '21 15:02 jayconscious

We also ran into this same issue with all workflows using the checkout action on one of our private repos. The problem in our case was that the auth token generated for each run, which the checkout action uses by default, was not working. We were able to work around it by adding our own PAT as a repo secret and having the checkout action use our token instead:

      - uses: actions/checkout@v2
        with:
          lfs: true
          token: ${{ secrets.ACCESS_TOKEN }}

There were no changes on our end that would have caused this. The issue just popped up out of nowhere after working without a problem for months.

At the very least, the checkout action could report this error more clearly and check the validity of the token before doing any privileged operations.
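
For what it's worth, a hedged sketch of such a pre-flight check (OWNER/REPO is a placeholder; ACCESS_TOKEN matches the secret above) would be a plain API call before the checkout step:

      # Sketch only: fail fast if the token cannot see the repository.
      # OWNER/REPO is a placeholder for the repository being checked out.
      - name: Verify token access
        run: |
          curl --fail --silent --show-error \
            -H "Authorization: token ${{ secrets.ACCESS_TOKEN }}" \
            https://api.github.com/repos/OWNER/REPO > /dev/null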

bboles avatar Mar 12 '21 21:03 bboles

Same issue here, folks, and we also used a PAT as a workaround 👍🏼

NachoNorris avatar Mar 15 '21 16:03 NachoNorris

Mine is weirder, hah.

Checkout completes successfully but produces an annotation, and clicking the annotation brings me to the logs page without focusing any logs. Looking at the logs, everything looks normal.

EDIT: never mind, mine is a problem of my own making; disregard.

meadowsys avatar Mar 31 '21 06:03 meadowsys

> We also ran into this same issue with all workflows using the checkout action on one of private repos. The issue in our case was with the auth token that gets generated for each run that the checkout action uses by default was not working. We were able to workaround the issue by adding our own PAT as a repo secret and having the checkout action use our token instead:

Same issue here for tokens generated for schedule, workflow_dispatch, and issue_comment triggered runs. Tokens issued for pull_request triggered runs worked fine.
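
For anyone hitting this on those triggers, a hedged sketch of the PAT workaround in context (GH_PAT is a placeholder secret name; the cron expression is just an example):

    on:
      schedule:
        - cron: '0 6 * * *'
      workflow_dispatch:

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          # The default token failed on these triggers for us, so pass a PAT explicitly.
          - uses: actions/checkout@v2
            with:
              token: ${{ secrets.GH_PAT }}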

MartinNowak avatar Apr 07 '21 13:04 MartinNowak

Saw this today too. Is it just me, or is GitHub looking and behaving more like Microsoft DevOps every day?

jeacott1 avatar Apr 09 '21 03:04 jeacott1

I have had a support case open with GitHub Support for this since I initially ran into it last month, and they reported that there is a pull request currently waiting to be merged to address it. They would not share any further details regarding the nature of the fix or a timeline for it to be merged and released.

bboles avatar Apr 10 '21 00:04 bboles

I get this error when trying to update the Docker container with TestCafe that the tests run on. There is no error when it comes to the containers themselves, but git throws inside this action. Extremely weird and blocking...

pavelloz avatar Apr 22 '21 12:04 pavelloz

According to a similar error in the Fisheye docs (https://confluence.atlassian.com/fishkb/non-zero-exit-code-128-error-executing-command-unable-to-find-remote-helper-for-http-305759561.html):

"This ERROR is caused when you have an Environment Variable called GIT_EXEC_PATH."

jeacott1 avatar Apr 22 '21 22:04 jeacott1

Hmm. Following this trail I found https://www.xspdf.com/resolution/59948454.html, which seems to be an aggregation of different forum threads about it. It seems like exit code 128 can mean a lot of different things, including wrong SSH keys. Maybe SSH keys are not propagated correctly from GHA to the checkout action?
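
If SSH propagation is the suspect, actions/checkout does accept an explicit key via its ssh-key input, so one way to rule that out is to pass a deploy key directly (sketch; DEPLOY_KEY is a placeholder secret holding the private key):

      - uses: actions/checkout@v2
        with:
          # Placeholder secret containing a private deploy key; this forces the SSH
          # path instead of the default token-based HTTPS fetch.
          ssh-key: ${{ secrets.DEPLOY_KEY }}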

pavelloz avatar Apr 23 '21 11:04 pavelloz

We are encountering the same issue at the moment for an action running in a private repository.

paresy avatar May 16 '21 06:05 paresy

Same here: https://github.com/rfprod/nx-ng-starter/runs/2593668512

However, premerge was successful just recently: https://github.com/rfprod/nx-ng-starter/runs/2593478731

rfprod avatar May 16 '21 07:05 rfprod

Same here. The action was successful 8 hours ago and is now failing with Error: fatal: repository 'https://github.com/<owner>/<ourprivaterepo>' not found. We never experienced this before.

gat-bryszard avatar May 16 '21 07:05 gat-bryszard

There is an incident with GitHub Actions: https://www.githubstatus.com/

mik639 avatar May 16 '21 07:05 mik639

Thank you @mik639. I was wondering why the checkout action for my latest push seemed not to be successful. I'm wondering what "incident" means here, though.

Ifycode avatar May 16 '21 08:05 Ifycode

Same issue here: https://github.com/status-im/js-waku/runs/2593777765, despite the incident being marked as resolved.

D4nte avatar May 17 '21 03:05 D4nte

Same issue here: The process '/usr/bin/git' failed with exit code 128. Waiting 10 seconds before trying again.

adityalolla avatar May 17 '21 09:05 adityalolla

Commenting "same here" is not helpful to anyone.

barthalion avatar May 17 '21 09:05 barthalion

We were facing this issue too; generating a new access token for git solved it, though I'm not sure why that would be necessary.

jcasilla-mahi avatar May 19 '21 14:05 jcasilla-mahi

@jcasilla-mahi my guess is that it wasn't necessary, but that GitHub resolved whatever issues it had at the same time. GitHub Actions is just kinda flaky, not to mention the pretty horrid user experience of driving the thing.

jeacott1 avatar May 20 '21 02:05 jeacott1