argo-cd
Error accumulating resources when using a plugin and overlays points to another repo
Checklist:
- [x] I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
- [x] I've included steps to reproduce the bug.
- [x] I've pasted the output of argocd version.
Describe the bug
I have successfully set up a plugin called argocd-vault-plugin and am using it with AWS Secrets; this works when my Kustomize files all live in the same repo.
Following is the configManagementPlugin section, where you can see the first step is simply to run kustomize build . before the output is passed into the vault plugin.
---
data:
  configManagementPlugins: |
    - name: argocd-vault-plugin
      generate:
        command: ["argocd-vault-plugin"]
        args: ["generate", "./"]
    - name: argocd-vault-plugin-kustomize
      generate:
        command: ["sh", "-c"]
        args: ["kustomize build . | argocd-vault-plugin generate -"]
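For context, a configManagementPlugins block like this lives under the data key of the argocd-cm ConfigMap in the argocd namespace; a minimal full-manifest sketch (standard metadata shown only for orientation):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm        # standard Argo CD config ConfigMap name
  namespace: argocd
data:
  configManagementPlugins: |
    - name: argocd-vault-plugin-kustomize
      generate:
        command: ["sh", "-c"]
        args: ["kustomize build . | argocd-vault-plugin generate -"]
```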
I have an application where the overlays are in the same repo, but within the overlays' kustomization the resources point to another repo. This works fine without the plugin, as I have added both repositories to ArgoCD so it can authenticate and pull the necessary files. It also builds fine locally when I run kustomize build path/to/app.
Application implemented with the argocd-vault-plugin-kustomize
source:
  path: overlays/in/this/repo
  repoURL: https://github.com/repo.git
  targetRevision: main
  plugin:
    name: argocd-vault-plugin-kustomize
When I implement the plugin I then get:
rpc error: code = Unknown desc = Manifest generation error (cached): `bash -c kustomize build . | argocd-vault-plugin generate -` failed exit status 1: Error: accumulating resources accumulation err='accumulating resources from 'https://github.com/a_different_repo.git/base': URL is a git repository': git cmd = '/usr/bin/git fetch --depth=1 origin HEAD': exit status 128 Error: No manifests
I believe it is down to the authentication not being applied, so kustomize is not able to pull the other repo.
Example kustomization.yaml file in overlays
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
images:
  - name: docker-image
    newName: name
    newTag: tag
resources:
  - https://github.com/a_different_repo.git/base
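Under the hood, kustomize resolves a remote base like the one above by shelling out to plain git, so authentication depends entirely on what git can see in the plugin's environment; Argo CD's stored repo credentials are not injected automatically. A minimal sketch of the failing call, using a deliberately nonexistent local path as a stand-in for the unreachable private repo:

```shell
# kustomize ultimately runs `git fetch` for a remote base; when git cannot
# reach or authenticate to the remote it exits non-zero (128 for fatal
# errors), which surfaces as the "accumulating resources ... exit status 128"
# error above.
tmp=$(mktemp -d)
git -C "$tmp" init -q
git -C "$tmp" remote add origin /nonexistent/repo.git   # stand-in remote
if ! git -C "$tmp" fetch --depth=1 origin HEAD 2>/dev/null; then
  echo "git fetch failed, as in the error above"
fi
```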
To Reproduce
Add a path to another repo in the resources section of the kustomization.yaml file
Add a plugin to the application spec
Try to deploy the app
Expected behavior
For the app to build like it does when the plugin is not used
Version
argocd: v2.0.4+0842d44.dirty
BuildDate: 2021-06-23T06:31:09Z
GitCommit: 0842d448107eb1397b251e63ec4d4bc1b4efdd6e
GitTreeState: dirty
GoVersion: go1.16.5
Compiler: gc
Platform: darwin/amd64
argocd-server: v2.3.2+ecc2af9
BuildDate: 2022-03-23T00:40:57Z
GitCommit: ecc2af9dcaa12975e654cde8cbbeaffbb315f75c
GitTreeState: clean
GoVersion: go1.17.6
Compiler: gc
Platform: linux/amd64
Ksonnet Version: v0.13.1
Kustomize Version: v4.4.1 2021-11-11T23:36:27Z
Helm Version: v3.8.0+gd141386
Kubectl Version: v0.23.1
Jsonnet Version: v0.18.0
Possibly related: https://github.com/argoproj/argo-cd/issues/8820
I have the same issue: if a kustomize base references another private repo, ArgoCD sync gives the above-mentioned error, while the "builtin" kustomize tool works fine.
My CMP looks like this:
configManagementPlugins: |
  - name: kustomize2
    generate:
      command: ["bash", "-c", "kustomize build ."]
      args: []
Version
argocd: v2.3.1+b65c169
BuildDate: 2022-03-10T22:51:09Z
GitCommit: b65c1699fa2a2daa031483a3890e6911eac69068
GitTreeState: clean
GoVersion: go1.17.6
Compiler: gc
Platform: linux/amd64
+1
Hitting the same exact issue. Without a plugin Kustomize with private remote base works properly, but with plugin enabled it does not.
I faced the same issue when using a CMP to run a kustomize build with a private repository.
Env: argocd-repo-server: v2.4.0 argocd-server: v2.4.0
Hopping on this train, since I believe it's the same issue I had originally reported on #8820. We were able to find a workaround to be able to upgrade to 2.4, but I'm hoping we can eventually solve it in a cleaner way.
I got stuck on the same issue after updating to the latest version of ArgoCD. My kustomize manifests refer to another version of the same repository using kustomize's remote targets feature, and they couldn't build with the plugin. Applications without the plugin work well. ArgoCD version: v2.4.7
I've used Git's AskPass to inject credentials in a similar case.
Git askpass script in a configmap:
https://github.com/HariSekhon/Kubernetes-configs/blob/master/git-askpass.configmap.yaml
Patch the ArgoCD repo server with this script and environment variables so it uses the above script and whatever standard k8s secret credentials you want:
https://github.com/HariSekhon/Kubernetes-configs/blob/master/argocd/base/argocd-git-askpass.repo-server.jsonpatch.yaml
@HariSekhon do you have to run the script manually? or define it in the plugin commands?
@kxs-sindrakumar it's executed implicitly by the git command in the container via the GIT_ASKPASS environment variable, which you can see set in the repo server patch above.
@HariSekhon so with the GIT_ASKPASS env variable set, the repo server should automatically authenticate to the repo, yeah? Do I need to do anything else besides applying the configmap/patches as you mentioned above?
I have the same format of the plugin + app as OP and am using your GIT_ASKPASS. Inside the env I see GIT_ASKPASS=/usr/local/bin/git_askpass.sh, but I am still getting a manifest generation error or no manifests found. Any ideas?
@kxs-sindrakumar I've tested it using the 2 files I pasted above in ArgoCD and it worked.
This is a very generic Git mechanism that you can test on the command line by just setting the GIT_ASKPASS environment variable to point to a script that returns the right credentials.
If you read the script in the configmap, it just outputs $GIT_USERNAME or $GIT_USER and $GIT_TOKEN or $GIT_PASSWORD, which you should obviously set for the script to pick up too.
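As a simplified stand-in for the linked script (same GIT_USERNAME/GIT_TOKEN convention; the /tmp path is just for this demo), the askpass mechanism can be sketched as:

```shell
# Minimal GIT_ASKPASS sketch: git calls the script once per prompt, passing
# a string like "Username for 'https://github.com': " or "Password for ...",
# and reads the credential from the script's stdout.
cat > /tmp/git_askpass.sh <<'EOF'
#!/bin/sh
case "$1" in
  Username*) echo "${GIT_USERNAME}" ;;
  Password*) echo "${GIT_TOKEN}" ;;
esac
EOF
chmod +x /tmp/git_askpass.sh

export GIT_ASKPASS=/tmp/git_askpass.sh
GIT_USERNAME=ci-bot /tmp/git_askpass.sh "Username for 'https://github.com': "  # prints ci-bot
```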
@HariSekhon my app points to a remote repo kustomization which points to another remote repo, both of which I have added over https in my Argo. Do you have something like this?
@kxs-sindrakumar I don't do the double hop. The script could in theory account for this based on the repo path, but if you configure the repos to use https and the https creds are configured for the repo in Argo, that should work too, as I've tested it with one repo. This is in the comment here:
https://github.com/HariSekhon/Kubernetes-configs/blob/master/git-askpass.configmap.yaml#L21
@HariSekhon so I can confirm it does look like mine is a git-related issue. I copied my kustomization.yaml file (shown below) into my repo server, then ran kustomize build . from that location, and I got a prompt asking for a git username and password; if I didn't enter them and just pressed enter twice, I got the error shown below.
I have your git askpass script in there with the correct credentials, but it doesn't seem to be working (I am on version 2.4.17). Any ideas?
kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - https://github.com/myorg/ce-cluster-addons/cert-manager/overlays/mytenant/myapp-qa?ref=v0.3.5
Error
Error: accumulating resources: accumulation err='accumulating resources from 'https://github.com/myorg/ce-cluster-addons/cert-manager/overlays/mytenant/myapp-qa?ref=v0.3.5': yaml: line 175: mapping values are not allowed in this context': git cmd = '/usr/bin/git fetch --depth=1 origin v0.3.5': exit status 128
@HariSekhon quick update: the git askpass does work if I run kustomize build . after shelling into the pod, but automatically it does not. Have you experienced that?
@kxs-sindrakumar @HariSekhon encountered a similar issue. Can be reproduced by following the setup laid out in the below repo. Any help is much appreciated.
https://github.com/abkura/argocd-kustomize.git
@abkura I created a ticket yesterday related to this as well, see the above mentions. I think this should be a high-priority item as I've seen a few posts related to it and no solution yet. Who can we tag to get attention on this ASAP? 2.2 is going EOL and we can't upgrade at all due to this. @crenshaw-dev sos!
Hi, I wanted to ask if there is any update or workaround for this issue. I have the same problem while setting up ArgoCD version 2.6.5. The workaround with askpass works when I shell into the pod, but during reconciliation it runs kustomize without any errors and the output seems to be empty.
@kxs-sindrakumar @abkura @Wompipomp I did get it working with the script embedded in yaml (you just need to mount k8s secrets to env vars for the argocd-repo-server-xxxxxxxx-xxxxx pod via the argocd-repo-server deployment), but I didn't test it on versions as new as yours, so it's possible that something changed between versions that broke this. The script mechanism is pretty basic Git functionality, though, so if you correctly mount matching environment variables it should just work; it isn't specific to anything ArgoCD does.
If you're checking out tagged bases from the same private repo, there is a simpler method in the comment in the yaml: just switch your ArgoCD app's repo checkout to use the same https url as the kustomization.yaml base, so the creds can be reused for both the checkout and the tagged base.
@HariSekhon thank you very much for the answer. :pray: Do you use the plugin via ConfigMap or via the sidecar configuration? When I try the configuration via ConfigMap it seems to work fine, but as that approach is deprecated I use the sidecar configuration, which does not seem to have the credentials in the container. I also tried different URL variants, but in vain.
When I use the credentials directly in the url of the remote base (like https://user:pw@...) it works. With askpass it works when I shell into the container and test it but not during reconciliation from argocd. It executes and finishes successfully but there is no output. Could also be a problem on my side. I will dig further when I have more time.
I used it as a configmap to drop the script into the argocd-repo-server pod.
This sounds like an environment problem in your configuration. Ultimately this mechanism is straight Git + environment variables.
I ran into a very similar issue, but related to a self-signed certificate rather than authentication.
When a remote target repository is resolved inside a plugin sidecar, it does not trust the self-signed certificate for that repository despite the repository being connected and working otherwise (not as a remote target).
Furthermore, there is no error and it's hard to understand what's going on: there is no error from the command defined in the generate section of the plugin config, but when the same command is used in the init section, it does produce an error in the sidecar logs as expected:
time="2023-04-17T17:11:19Z" level=error msg="`bash -c \"kustomize build .\"` failed exit status 1: Error: accumulating resources: accumulation err='accumulating resources from 'https://gitlab/it-cloud-openshift/unified-config-management//core/system/proxy/?ref=main': Get \"https://gitlab/it-cloud-openshift/unified-config-management//core/system/proxy/?ref=main\": x509: certificate signed by unknown authority': git cmd = '/usr/bin/git fetch --depth=1 origin main': exit status 128" execID=96a8f
Workaround:
env:
  - name: GIT_SSL_NO_VERIFY
    value: "true"
on the plugin sidecar container.
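For orientation, a sketch of where that env block lands, as a patch on the argocd-repo-server Deployment (the sidecar container name cmp-kustomize is a hypothetical placeholder). Note that GIT_SSL_NO_VERIFY disables TLS verification entirely; mounting the CA certificate into the sidecar is the safer long-term fix:

```yaml
# Strategic-merge patch sketch on the argocd-repo-server Deployment;
# "cmp-kustomize" is an assumed sidecar container name.
spec:
  template:
    spec:
      containers:
        - name: cmp-kustomize
          env:
            - name: GIT_SSL_NO_VERIFY
              value: "true"
```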
I'm attempting to get the vault plugin working with a Kustomize application and I believe I'm running into this as well. I'm trying to understand the scope of the issue to determine if it's a dealbreaker.
From the docs:
Sidecar plugin
This is a good option for a more complex plugin that would clutter the Argo CD ConfigMap. A copy of the repository is sent to the sidecar container as a tarball and processed individually per application.
Also from the docs:
generate:
  command:
    - sh
    - "-c"
    - "kustomize build . | argocd-vault-plugin generate -"
Needing to run kustomize build inside the plugin means that "A copy of the repository is sent to the sidecar container as a tarball" is technically true, but it's not sufficient if there are any resources defined in another repository, as they'll have to be resolved from inside the sidecar. And whatever magic the repo server normally does to supply credentials defined in credentialTemplates doesn't work in plugin sidecars. Is that accurate?
Is there any way to run the kustomize build inside of the repo server and pass the output to the sidecar instead?
I've spent the last few days addressing this problem myself, and I'd like to offer up what I did as a workaround:
I have to use a GitHub App to authenticate to our internal GitHub, which means doing a token exchange. I combined the scripts here: https://gist.github.com/rajbos/8581083586b537029fe8ab796506bec3 to write out the JWT and POST it to our internal GitHub API to get a valid token. I had to add jq and a statically built curl to the tools downloader/repo server.
In addition to the scripts above, I also added
git config --global credential.helper store
echo "https://x-access-token:$token@$serverUrl" >> "${HOME}/.git-credentials"
at the end.
I wrote all of this out to a configmap which I mount into the plugin sidecar, finally, during the init phase, I run the script, which will grab a new token and write out the credentials so I'm able to auth into the internal repos.
I hope this helps anyone who may come across this issue and hopefully there's a cleaner way to do this in the future.
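The credential-store lines above can be exercised in isolation; this sketch uses hypothetical token/server values and an isolated HOME so it doesn't touch real git config:

```shell
# Sketch of the credential-store step; in the real setup $token comes from
# the GitHub App token exchange described above.
export HOME="$(mktemp -d)"             # isolate global git config for the demo
token="ghs_hypothetical_token"
serverUrl="github.example.com"

git config --global credential.helper store
echo "https://x-access-token:${token}@${serverUrl}" >> "${HOME}/.git-credentials"

git config --global credential.helper   # prints: store
```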
For the benefit of anyone else who comes across this: I had a similar problem that I thought was related to this, but wasn't using any plugins.
I was specifying my base with a https:// format but needed to use a git@ format, matching the way that the repo that the application was derived from was configured.
I missed this in the docs: https://argo-cd.readthedocs.io/en/stable/user-guide/kustomize/#private-remote-bases
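As a hedged illustration of that doc section: if the Application's repoURL uses the ssh form, the remote base should match it (the repo path and ref below are hypothetical):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # ssh form, matching an Application whose repoURL is git@github.com:myorg/myrepo.git
  - ssh://git@github.com/myorg/myrepo//base?ref=main
```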
Wild. Me too. Thanks for mentioning it.
In my case, I was getting the below error:
CORRA\corra-xxx@COK-LAP-008:~$ /usr/local/bin/argocd-autopilot-linux-amd64-v0.4.15 repo bootstrap --repo https://bitbucket.org/myorg/corra-devopsdev-k8s --provider bitbucket --git-user anishasokan --log-level debug
DEBU[2023-11-20T16:06:51+05:30] starting with options: kube-context=devops-dev namespace=argocd repo-url="https://bitbucket.org/myorg/corra-devopsdev-k8s.git" revision=
DEBU[2023-11-20T16:06:51+05:30] running bootstrap kustomization: apiVersion: kustomize.config.k8s.io/v1beta1
configMapGenerator:
- behavior: merge
literals:
- |
repository.credentials=- passwordSecret:
key: git_token
name: autopilot-secret
url: https://bitbucket.org/
usernameSecret:
key: git_username
name: autopilot-secret
name: argocd-cm
kind: Kustomization
namespace: argocd
resources:
- github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.15
bootstrapKustPath=auto-pilot1005270814/kustomization.yaml resourcePath="github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.15"
FATA[2023-11-20T16:06:53+05:30] failed to build bootstrap manifests: failed running kustomization: accumulating resources: accumulation err='accumulating resources from 'github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.15': evalsymlink failure on '/home/local/CORRA/corra-xxx/auto-pilot1005270814/github.com/argoproj-labs/argocd-autopilot/manifests/base?ref=v0.4.15' : lstat /home/local/CORRA/corra-xxx/auto-pilot1005270814/github.com: no such file or directory': recursed accumulation of path '/tmp/kustomize-2603194163/manifests/base': accumulating resources: accumulation err='accumulating resources from 'https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml': Get "https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml": read tcp 10.10.0.128:58756->185.199.111.133:443: read: connection reset by peer': git cmd = '/usr/bin/git fetch --depth=1 origin HEAD': exit status 128
I had the kubectl-cert_manager kubectl plugin installed; I removed it and now /usr/local/bin/argocd-autopilot-linux-amd64-v0.4.15 repo bootstrap is working fine. In my case the kubectl plugin kubectl-cert_manager was causing the issues.
thanks, anish
I also hit the same issue. I avoided it by changing the configmap: instead of searching for kustomization.yaml, I search for *auth.yaml:
discover:
  find:
    command:
      - sh
      - "-c"
      - "find . -name '*auth.yaml'"
This way the plugin will not be hit for every kustomization.yaml where no secret is defined.
When I need to create a secret, I don't use the remote base config (ideally a secret should not be in the base config):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: platform-gitops
resources:
  - platform-services-auth.yaml