
Improve private repository authentication handling strategy for remote URLs

abstractalchemist opened this issue 3 years ago · 15 comments

Describe the bug

I am receiving an error when using kustomize and pointing to a remote kustomization target in AWS CodeCommit.

Files that can reproduce the issue

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- git::https://git-codecommit.us-west-2.amazonaws.com/v1/repos/test-flux-deployment

Expected output

The remote repository contains a standard Kubernetes deployment file

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}

and a kustomization file

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yml
namespace: default

Actual output

ssm-user@k8s-control:~/test-project$ kustomize build
Username for 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos': Administrator-at-308692676076
Password for 'https://Administrator-at-308692676076@git-codecommit.us-west-2.amazonaws.com/v1/repos':
Error: accumulating resources: accumulation err='accumulating resources from 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos/test-flux-deployment': missing Resource metadata': git cmd = '/usr/bin/git fetch --depth=1 origin HEAD': exit status 128

Kustomize version

4.4.1

Platform

Linux

Additional context

I've also tried

kustomize build git::https://git-codecommit.us-west-2.amazonaws.com/v1/repos/test-flux-deployment --stack-trace

failing with the error

Username for 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos': Administrator-at-308692676076
Password for 'https://Administrator-at-308692676076@git-codecommit.us-west-2.amazonaws.com/v1/repos':
Error: git cmd = '/usr/bin/git fetch --depth=1 origin HEAD': exit status 128

I have already independently verified that the username and password provided can access the repository via the HTTPS URL when using the git binary installed on the system.

ssm-user@k8s-control:~/test-kustomize$ git version
git version 2.17.1
ssm-user@k8s-control:~/test-kustomize$
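
One way to avoid the interactive prompts shown above is to configure a git credential helper, so the git fetch that kustomize runs internally can authenticate non-interactively. A minimal sketch for CodeCommit over HTTPS, assuming the AWS CLI is installed and its credentials have CodeCommit access:

# Let the AWS CLI act as git's credential helper for CodeCommit
git config --global credential.helper '!aws codecommit credential-helper $@'
# CodeCommit resolves credentials per repository path
git config --global credential.UseHttpPath true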

abstractalchemist avatar Nov 17 '21 01:11 abstractalchemist

@abstractalchemist: This issue is currently awaiting triage.

SIG CLI takes a lead on issue triage for this repo, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 17 '21 01:11 k8s-ci-robot

I'm having a similar error that I think is related to this issue: in my overlay kustomization.yaml we reference another private repository

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - "[email protected]:my-org/charts//kustomize/base/app?ref=kustomize"

The overlay and the referenced resource live in separate private repositories within the same GitHub Org.

When running kustomize build <path-to-kustomization-yaml> locally, the manifests are rendered as expected. When running from within a GitHub Action, it does not work and gives the following error:

Error: accumulating resources: accumulation err='accumulating resources from 'git@github.com:my-org/charts//kustomize/base/app?ref=kustomize': evalsymlink failure on '/home/runner/work/gitops-shared-dev-app/gitops-shared-dev-app/kustomizations/overlays/shared-dev/git@github.com:my-org/charts/kustomize/base/app?ref=kustomize' : lstat /home/runner/work/gitops-shared-dev-app/gitops-shared-dev-app/kustomizations/overlays/shared-dev/git@github.com:my-org: no such file or directory': git cmd = '/usr/bin/git fetch --depth=1 origin kustomize': exit status 128
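
A workaround commonly used in CI for this symptom (a sketch, not from this thread): before the build step, rewrite SSH-style GitHub URLs to token-authenticated HTTPS so the git subprocess spawned by kustomize can fetch the private repository non-interactively. REPO_TOKEN is a hypothetical secret holding a token with read access to my-org/charts:

# Rewrite git@github.com: URLs to HTTPS with an access token
# (REPO_TOKEN is a hypothetical secret, not part of this thread)
git config --global url."https://x-access-token:${REPO_TOKEN}@github.com/".insteadOf "git@github.com:"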

RappC avatar Nov 18 '21 08:11 RappC

I'm having the exact same issue. It is definitely related to the reference in the overlay to a base located in a different private repository within the same GitHub Org, and the problem only occurs in GitHub workflows. Did you ever find a solution for this?

runderwoodcr14 avatar Feb 16 '22 07:02 runderwoodcr14

I'm running into this issue too; if someone has gotten past it, that would be great. What makes it weird is that the config was working before; we just added files to the repo we are referencing.

bradenwright avatar Feb 25 '22 19:02 bradenwright

Same here when running kustomize build . --enable_alpha_plugins from the CLI

RotemBirman avatar Feb 28 '22 07:02 RotemBirman

Sigh, I'm running into this too, or at least a really similar error. The weird thing is that this was 100% working for me 2 hours ago (from my terminal, iTerm2 on Mac). Then I opened a new terminal, got prompted for an oh-my-zsh update, and now either kustomize or my shell is broken; I uninstalled oh-my-zsh and it is still broken. Not sure what's going on, but I'm posting in case the extra info helps, and I want to subscribe to the thread.

What's weird is that the examples in kustomize -h and kubectl kustomize -h are also broken, in both zsh and bash, with slightly different error messages depending on the shell (there's not much difference between the kustomize baked into kubectl and standalone kustomize). The examples below come directly from the --help output.

zsh_prompt# kubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
zsh_prompt# kustomize build https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6

zsh: no matches found: https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6

bash_prompt# kubectl kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6
bash_prompt# kustomize build https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6

error: hit 27s timeout running '/usr/local/bin/git fetch --depth=1 origin v1.0.6'

neoakris avatar Mar 13 '22 02:03 neoakris

Update: I was able to fix my issue (posting here in case other Googlers find this).

To fix zsh, I commented out kubectl autocompletion in my ~/.zshrc, then added setopt no_nomatch to ~/.zshrc.

kustomize's example from kustomize build -h started working normally in zsh (and bash) after that.
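
Another minimal sketch for the zsh symptom: quoting the URL prevents zsh from treating the '?' as a glob pattern in the first place:

# Single quotes stop zsh from glob-expanding '?ref=...'
kustomize build 'https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6'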

neoakris avatar Mar 13 '22 03:03 neoakris

This happens frequently for me when kustomize has resources pointing to private git repos.

rahul-mourya-labs avatar Mar 16 '22 03:03 rahul-mourya-labs

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 14 '22 04:06 k8s-triage-robot

I am experiencing the same issue. Did anyone find a good solution in the meantime?

tpvasconcelos avatar Jun 15 '22 14:06 tpvasconcelos

I am also experiencing the same issue.

peterghaddad avatar Jun 27 '22 20:06 peterghaddad

It seems that there are multiple requests for better private repository authentication in kustomize. We consider this low priority because the remote URL feature was never intended to be used in production. That being said, if someone has a fleshed-out proposal for a better way to authenticate to private repositories, please feel free to submit it for review.

Instructions for creating a mini-proposal are here.

natasha41575 avatar Jul 06 '22 16:07 natasha41575

/retitle Improve private repository authentication handling strategy for remote URLs

natasha41575 avatar Jul 06 '22 16:07 natasha41575

@natasha41575

It seems that there are multiple requests for better private repository authentication in kustomize. We consider this low priority because the remote URL feature was never intended to be used in production.

I consider this 'feature' absolutely key to kustomize being useful. In the same way that large chunks of remote configuration can be linked in Terraform, and for exactly the same use cases, in my mind this should not be a low priority. I made a suggestion on how it could work in #4690, but kustomize really needs a broader plan for this aspect generally.

jeacott1 avatar Jul 07 '22 03:07 jeacott1

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 06 '22 04:08 k8s-triage-robot

+1, I seem to be running into the same issue with GitLab private repos.

Some variations I tried for GitLab (see the note on URL syntax after the list):

  • ssh://git@<gitlab_url>:<gitlab_user>/<gitlab_group>/<gitlab_repo>//<repo_path>
  • ssh://git@<gitlab_url>:<gitlab_user>/<gitlab_group>/<gitlab_repo>.git//<repo_path>
  • ssh://git@<gitlab_url>:<gitlab_user>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
  • ssh://<gitlab_user>@<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
  • https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git//<repo_path>
  • git::https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git//<repo_path>?ref=main
  • git::https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git//<repo_path>?ref=main&timeout=120
  • https://<gitlab_url>/<gitlab_group>/<gitlab_repo>//<repo_path>
  • https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
  • ssh://git@<gitlab_url>:<gitlab_group>/<gitlab_repo>.git//<repo_path>/?ref=main
  • ssh://git@<gitlab_url>:<gitlab_group>/<gitlab_repo>//<repo_path>/?ref=main
  • ssh://git@<gitlab_url>:<gitlab_group>/<gitlab_repo>/<repo_path>/?ref=main
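
Note on the ssh:// variants above: in an ssh:// URL the colon after the host introduces a port number, so forms like ssh://git@<gitlab_url>:<gitlab_group>/... are malformed; the colon separator is only valid in scp-style syntax without a scheme. Two forms consistent with kustomize's documented remote-target examples (a sketch, with the same hypothetical placeholders as above):

# scp-style: colon separates host from path, no scheme
git@<gitlab_url>:<gitlab_group>/<gitlab_repo>//<repo_path>?ref=main
# ssh:// scheme: slash, not colon, after the host
ssh://git@<gitlab_url>/<gitlab_group>/<gitlab_repo>//<repo_path>?ref=main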

jhoelzel avatar Sep 06 '22 16:09 jhoelzel

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Oct 06 '22 19:10 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Oct 06 '22 19:10 k8s-ci-robot

Instead of using this URL: https://<gitlab_url>/<gitlab_group>/<gitlab_repo>/<repo_path>

I succeeded with this one: https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>
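
As a kustomization resource entry, that form would look like the sketch below (same hypothetical placeholders; the explicit .git suffix marks where the repository ends and the in-repo path begins):

resources:
- https://<gitlab_url>/<gitlab_group>/<gitlab_repo>.git/<repo_path>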

mfahrul avatar Oct 14 '22 10:10 mfahrul

Hi there 👋🏻

I am facing the same issue starting today... more context is below 😁

Context

  • For various reasons, we were still using kustomize 3.6.1. Everything was "still" running fine.
  • Starting today, with the same Kustomize version, I get the issue where HTTP credentials are requested multiple times during the build command as stated here.
$ kustomize build --enable_alpha_plugins > full.yaml

Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':
Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':
Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':
Username for 'https://github.com': xakraz
Password for 'https://xakraz@github.com':
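
A sketch for at least suppressing the repeated prompts (not part of the original report): git's built-in in-memory credential cache lets every git subprocess of a single build reuse the first credentials entered:

# Cache entered credentials in memory (default timeout: 15 minutes)
git config --global credential.helper cache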

Tests

  • ✔️ I have tried to pull the repo manually through git as explained, and it works without a prompt
  • ✔️ I have tried to clone the repo manually through git with the same URL as mentioned, and it works without a prompt
  • ❌ I have tried updating to kustomize 4.5.7 and updating the remote URL for that particular repo (moving away from the go-getter URL format), and it does NOT work:
    • same git credentials prompt issue
    • plus a new error, as mentioned previously by others:
Error: accumulating resources: accumulation err='accumulating resources from 'https://github.com/ORG_NAME/REPO_NAME.git/deployment?ref=COMMIT_SHORTSHA': 
URL is a git repository': git cmd = '/usr/bin/git fetch --depth=1 origin COMMIT_SHORTSHA': exit status 128

I tried running that last git command manually from the repo directory:

$ git fetch --depth=1 origin COMMIT_SHORTSHA
fatal: couldn't find remote ref COMMIT_SHORTSHA

Whereas the commit exists and is reachable through the GitHub web UI 🤔

📝 When I said "today" earlier, it is because the only change I can think of is my upgrade from git 2.39.0 to 2.39.1 this morning ...

Update

Regarding the git error with the short commit id, here is the explanation: https://github.com/kubernetes-sigs/kustomize/issues/3761

So this "works as expected".

xakraz avatar Jan 27 '23 14:01 xakraz