argocd-vault-plugin
RBAC required by plugin is overly permissive
Bug description
When using the sidecar deployment and the proper RBAC permissions for the argocd-repo-server, if you try to restrict RBAC access to a specific secret name, you will receive an error, as in the example below:
kustomize build . | argocd-vault-plugin generate -s vault-configuration -
The argocd-repo-server container logs the following:
Error: secrets "vault-configuration" is forbidden: User "system:serviceaccount:argocd:argocd-repo-server" cannot get resource "secrets" in API group "" in the namespace "argocd"
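The same denial can be checked outside the plugin with kubectl's impersonation support, which is a quick way to see what the repo-server's ServiceAccount is actually allowed to do (secret and namespace names taken from the manifests below):

# Can the ServiceAccount read the one named secret?
kubectl auth can-i get secrets/vault-configuration \
  --as=system:serviceaccount:argocd:argocd-repo-server -n argocd

# Can it read secrets in the namespace without naming one?
kubectl auth can-i get secrets \
  --as=system:serviceaccount:argocd:argocd-repo-server -n argocd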
I have a few questions regarding this implementation:
- Why does it try to list every secret if I specified a single secret at runtime?
- Is it considered best practice to keep secret-access permissions that broad when the plugin only needs specific resources?
Once I changed the Role to allow access to every secret in the namespace, the plugin worked as intended.
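For reference, the broadened Role that made the plugin work is roughly the rule from the manifests below with the resourceNames restriction dropped (a sketch reconstructed from the description above, so treat it as illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-repo-server
  namespace: argocd
rules:
- apiGroups:
  - ""
  resources:   # no resourceNames entry: every secret in the namespace is readable
  - secrets
  verbs:
  - get
  - watch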
Manifests used
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argocd-repo-server
  namespace: argocd
rules:
- apiGroups:
  - ""
  resourceNames:
  - vault-configuration
  resources:
  - secrets
  verbs:
  - get
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: argocd-repo-server
  namespace: argocd
subjects:
- kind: ServiceAccount
  name: argocd-repo-server
  namespace: argocd
roleRef:
  kind: Role
  name: argocd-repo-server
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: cmp-plugin
data:
  avp-kustomize.yaml: |
    ---
    apiVersion: argoproj.io/v1alpha1
    kind: ConfigManagementPlugin
    metadata:
      name: argocd-vault-plugin-kustomize
    spec:
      allowConcurrency: true
      discover:
        find:
          command:
          - find
          - "."
          - -name
          - kustomization.yaml
      generate:
        command:
        - sh
        - "-c"
        - "kustomize build . | argocd-vault-plugin generate -s vault-configuration -"
      lockRepo: false
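For context, the cmp-plugin ConfigMap above is consumed by a CMP sidecar on the argocd-repo-server Deployment. A minimal sketch of that wiring, following the Argo CD sidecar-plugin convention (the sidecar image is a placeholder and must contain both kustomize and argocd-vault-plugin):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
      - name: avp-kustomize
        image: registry.example.com/avp-kustomize:latest  # hypothetical image with kustomize + AVP
        command: [/var/run/argocd/argocd-cmp-server]      # standard CMP server entrypoint
        securityContext:
          runAsNonRoot: true
          runAsUser: 999
        volumeMounts:
        # var-files and plugins already exist in the stock argocd-repo-server Deployment
        - mountPath: /var/run/argocd
          name: var-files
        - mountPath: /home/argocd/cmp-server/plugins
          name: plugins
        # the plugin definition from the ConfigMap above
        - mountPath: /home/argocd/cmp-server/config/plugin.yaml
          subPath: avp-kustomize.yaml
          name: cmp-plugin
      volumes:
      - name: cmp-plugin
        configMap:
          name: cmp-plugin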
To Reproduce
Steps to reproduce the behavior:
- Apply the manifests listed above
- Create a proper vault-configuration manifest in the argocd namespace (see the sketch after this list)
- Create a working Vault server
- Observe the argocd-repo-server container stdout for the error when running the sidecar command
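For the second step, a minimal vault-configuration Secret could look like the following. AVP reads its connection settings from the keys of the Secret passed via -s; token auth is shown here only because it is the simplest case, and the address and token values are placeholders:

apiVersion: v1
kind: Secret
metadata:
  name: vault-configuration
  namespace: argocd
stringData:
  VAULT_ADDR: http://vault.vault:8200   # placeholder in-cluster Vault address
  AVP_TYPE: vault
  AVP_AUTH_TYPE: token
  VAULT_TOKEN: root                     # dev-mode token; do not use token auth like this in production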
Expected behavior
- Considering this is a security-oriented application, it should be possible to apply a more restrictive policy when allowing a container to access secrets on Kubernetes.
I encountered this issue as well.
Exactly the same here - thx @gruberdev !!!
Having the same issue, even with version 1.15.
I am facing the same problem (AVP version 1.17.0). To deal with it, I have created a ClusterRole and binding.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argocd-repo-server
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argocd-repo-server
subjects:
- kind: ServiceAccount
  name: argocd-repo-server
  namespace: argocd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argocd-repo-server
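To confirm the cluster-wide grant took effect, the impersonation check from earlier can be pointed at any other namespace (the namespace name here is arbitrary):

kubectl auth can-i get secrets \
  --as=system:serviceaccount:argocd:argocd-repo-server -n some-other-namespace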
Additionally, you need to mount a ServiceAccount token when you patch the argocd-repo-server Deployment:
automountServiceAccountToken: true
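A minimal way to set that field, assuming the stock argocd-repo-server Deployment in the argocd namespace:

kubectl -n argocd patch deployment argocd-repo-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/automountServiceAccountToken", "value": true}]'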
The reason I created a ClusterRole and binding rather than a Role and binding is that I want to run Application resources outside the argocd namespace.
I expect the provision to add the (Cluster)Role and binding to come either from Argo CD (as the framework for hosting CMPs) or explicitly from AVP.
I am also still learning and experimenting with Argo CD, so I have no deep knowledge of it.
@werne2j, what do you suggest in this case, or do you have any insight into it? Thanks