secrets-store-csi-driver
Separate tolerations for the daemonset and the CRD upgrade hook
What type of PR is this? This PR adds an option to configure tolerations separately for the driver daemonset and the CRD upgrade hook Jobs. Given the following values.yaml:
linux:
  enabled: true
  image:
    repository: registry.k8s.io/csi-secrets-store/driver
    tag: v1.4.3
    # digest: sha256:
    pullPolicy: IfNotPresent
  # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  # An empty key with operator Exists matches all keys, values and effects which means this will tolerate everything.
  tolerations:
    - operator: "Exists"
  crds:
    enabled: true
    image:
      repository: registry.k8s.io/csi-secrets-store/driver-crds
      tag: v1.4.3
      pullPolicy: IfNotPresent
    ## Optionally override resource limits for crd hooks(jobs)
    resources: {}
      # requests:
      #   cpu: "100m"
      #   memory: "128Mi"
      # limits:
      #   cpu: "500m"
      #   memory: "512Mi"
    annotations: {}
    podLabels: {}
    # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
    # An empty key with operator Exists matches all keys, values and effects which means this will tolerate everything.
    tolerations:
      - operator: "Exists-crds"
  ## Prevent the CSI driver from being scheduled on virtual-kubelet nodes
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: type
                operator: NotIn
                values:
                  - virtual-kubelet
  driver:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 50m
        memory: 100Mi
  registrarImage:
    repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
    tag: v2.10.0
    # digest: sha256:
    pullPolicy: IfNotPresent
  registrar:
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 20Mi
    logVerbosity: 5
  livenessProbeImage:
    repository: registry.k8s.io/sig-storage/livenessprobe
    tag: v2.12.0
    # digest: sha256:
    pullPolicy: IfNotPresent
  livenessProbe:
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
      requests:
        cpu: 10m
        memory: 20Mi
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  kubeletRootDir: /var/lib/kubelet
  providersDir: /var/run/secrets-store-csi-providers
  additionalProvidersDirs:
    - /etc/kubernetes/secrets-store-csi-providers
  nodeSelector: {}
  metricsAddr: ":8095"
  env: []
  priorityClassName: ""
  daemonsetAnnotations: {}
  podAnnotations: {}
  podLabels: {}
  # volumes is a list of volumes made available to secrets store csi driver.
  volumes: null
  # - name: foo
  #   emptyDir: {}
  # volumeMounts is a list of volumeMounts for secrets store csi driver.
  volumeMounts: null
  # - name: foo
  #   mountPath: /bar
  #   readOnly: true

windows:
  enabled: false
  image:
    repository: registry.k8s.io/csi-secrets-store/driver
    tag: v1.4.3
    # digest: sha256:
    pullPolicy: IfNotPresent
  ## Prevent the CSI driver from being scheduled on virtual-kubelet nodes
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: type
                operator: NotIn
                values:
                  - virtual-kubelet
  driver:
    resources:
      limits:
        cpu: 400m
        memory: 400Mi
      requests:
        cpu: 100m
        memory: 100Mi
  registrarImage:
    repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
    tag: v2.10.0
    # digest: sha256:
    pullPolicy: IfNotPresent
  registrar:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 100Mi
    logVerbosity: 5
  livenessProbeImage:
    repository: registry.k8s.io/sig-storage/livenessprobe
    tag: v2.12.0
    # digest: sha256:
    pullPolicy: IfNotPresent
  livenessProbe:
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 100Mi
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  kubeletRootDir: C:\var\lib\kubelet
  providersDir: C:\\k\\secrets-store-csi-providers
  additionalProvidersDirs:
  nodeSelector: {}
  # ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
  # An empty key with operator Exists matches all keys, values and effects which means this will tolerate everything.
  tolerations:
    - operator: "Exists"
  metricsAddr: ":8095"
  env: []
  priorityClassName: ""
  daemonsetAnnotations: {}
  podAnnotations: {}
  podLabels: {}
  # volumes is a list of volumes made available to secrets store csi driver.
  volumes: null
  # - name: foo
  #   emptyDir: {}
  # volumeMounts is a list of volumeMounts for secrets store csi driver.
  volumeMounts: null
  # - name: foo
  #   mountPath: /bar
  #   readOnly: true

# log level. Uses V logs (klog)
logVerbosity: 0
# logging format JSON
logFormatJSON: false
livenessProbe:
  port: 9808
  logLevel: 2
## Maximum size in bytes of gRPC response from plugins
maxCallRecvMsgSize: 4194304
## Install Default RBAC roles and bindings
rbac:
  install: true
  pspEnabled: false
## Install RBAC roles and bindings required for K8S Secrets syncing if true
syncSecret:
  enabled: false
## Enable secret rotation feature [alpha]
enableSecretRotation: false
## Secret rotation poll interval duration
rotationPollInterval:
## Provider HealthCheck
providerHealthCheck: false
## Provider HealthCheck interval
providerHealthCheckInterval: 2m
imagePullSecrets: []
## This allows CSI drivers to impersonate the pods that they mount the volumes for.
# refer to https://kubernetes-csi.github.io/docs/token-requests.html for more details.
# Supported only for Kubernetes v1.20+
tokenRequests: []
# - audience: aud1
# - audience: aud2
# -- Labels to apply to all resources
commonLabels: {}
# team_name: dev
the chart will generate the following manifests:
---
# Source: secrets-store-csi-driver/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: secrets-store-csi-driver
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
---
# Source: secrets-store-csi-driver/templates/role-secretproviderclasses-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
name: secretproviderclasses-admin-role
rules:
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasses
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
---
# Source: secrets-store-csi-driver/templates/role-secretproviderclasses-viewer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: secretproviderclasses-viewer-role
rules:
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasses
verbs:
- get
- list
- watch
---
# Source: secrets-store-csi-driver/templates/role-secretproviderclasspodstatuses-viewer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: secretproviderclasspodstatuses-viewer-role
rules:
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasspodstatuses
verbs:
- get
- list
- watch
---
# Source: secrets-store-csi-driver/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secretproviderclasses-role
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasses
verbs:
- get
- list
- watch
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasspodstatuses
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasspodstatuses/status
verbs:
- get
- patch
- update
- apiGroups:
- storage.k8s.io
resourceNames:
- secrets-store.csi.k8s.io
resources:
- csidrivers
verbs:
- get
- list
- watch
---
# Source: secrets-store-csi-driver/templates/role_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: secretproviderclasses-rolebinding
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: secretproviderclasses-role
subjects:
- kind: ServiceAccount
name: secrets-store-csi-driver
namespace: default
---
# Source: secrets-store-csi-driver/templates/secrets-store-csi-driver.yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: release-name-secrets-store-csi-driver
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
spec:
selector:
matchLabels:
app: secrets-store-csi-driver
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
kubectl.kubernetes.io/default-container: secrets-store
spec:
serviceAccountName: secrets-store-csi-driver
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: NotIn
values:
- virtual-kubelet
containers:
- name: node-driver-registrar
image: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0"
args:
- --v=5
- --csi-address=/csi/csi.sock
- --kubelet-registration-path=/var/lib/kubelet/plugins/csi-secrets-store/csi.sock
imagePullPolicy: IfNotPresent
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 20Mi
- name: secrets-store
image: "registry.k8s.io/csi-secrets-store/driver:v1.4.3"
args:
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(KUBE_NODE_NAME)"
- "--provider-volume=/var/run/secrets-store-csi-providers"
- "--additional-provider-volume-paths=/etc/kubernetes/secrets-store-csi-providers"
- "--metrics-addr=:8095"
- "--provider-health-check-interval=2m"
- "--max-call-recv-msg-size=4194304"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
ports:
- containerPort: 9808
name: healthz
protocol: TCP
- containerPort: 8095
name: metrics
protocol: TCP
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 15
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: mountpoint-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: Bidirectional
- name: providers-dir
mountPath: /var/run/secrets-store-csi-providers
- name: providers-dir-0
mountPath: "/etc/kubernetes/secrets-store-csi-providers"
resources:
limits:
cpu: 200m
memory: 200Mi
requests:
cpu: 50m
memory: 100Mi
- name: liveness-probe
image: "registry.k8s.io/sig-storage/livenessprobe:v2.12.0"
imagePullPolicy: IfNotPresent
args:
- --csi-address=/csi/csi.sock
- --probe-timeout=3s
- --http-endpoint=0.0.0.0:9808
- -v=2
volumeMounts:
- name: plugin-dir
mountPath: /csi
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 20Mi
volumes:
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet/pods
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry/
type: Directory
- name: plugin-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-secrets-store/
type: DirectoryOrCreate
- name: providers-dir
hostPath:
path: /var/run/secrets-store-csi-providers
type: DirectoryOrCreate
- name: providers-dir-0
hostPath:
path: "/etc/kubernetes/secrets-store-csi-providers"
type: DirectoryOrCreate
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
---
# Source: secrets-store-csi-driver/templates/csidriver.yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: secrets-store.csi.k8s.io
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
spec:
podInfoOnMount: true
attachRequired: false
# Added in Kubernetes 1.16 with default mode of Persistent. Secrets Store CSI Driver needs Ephemeral to be set.
volumeLifecycleModes:
- Ephemeral
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "1"
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "2"
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "1"
rules:
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "create", "update", "patch"]
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "2"
rules:
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "patch"]
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "1"
subjects:
- kind: ServiceAccount
name: release-name-secrets-store-csi-driver-upgrade-crds
namespace: default
roleRef:
kind: ClusterRole
name: release-name-secrets-store-csi-driver-upgrade-crds
apiGroup: rbac.authorization.k8s.io
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "2"
subjects:
- kind: ServiceAccount
name: release-name-secrets-store-csi-driver-keep-crds
namespace: default
roleRef:
kind: ClusterRole
name: release-name-secrets-store-csi-driver-keep-crds
apiGroup: rbac.authorization.k8s.io
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: secrets-store-csi-driver-upgrade-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-weight: "10"
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
spec:
backoffLimit: 3
template:
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
spec:
serviceAccountName: release-name-secrets-store-csi-driver-upgrade-crds
restartPolicy: Never
containers:
- name: crds-upgrade
image: "registry.k8s.io/csi-secrets-store/driver-crds:v1.4.3"
args:
- apply
- -f
- crds/
imagePullPolicy: IfNotPresent
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists-crds
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: secrets-store-csi-driver-keep-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-weight: "20"
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
spec:
backoffLimit: 3
template:
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
spec:
serviceAccountName: release-name-secrets-store-csi-driver-keep-crds
restartPolicy: Never
containers:
- name: crds-keep
image: "registry.k8s.io/csi-secrets-store/driver-crds:v1.4.3"
args:
- patch
- crd
- secretproviderclasses.secrets-store.csi.x-k8s.io
- secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io
- -p
- '{"metadata":{"annotations": {"helm.sh/resource-policy": "keep"}}}'
imagePullPolicy: IfNotPresent
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists-crds
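The CRD hook Jobs above pick up the dedicated Exists-crds toleration from linux.crds.tolerations, while the daemonset keeps linux.tolerations. As a rough sketch of the intended template behaviour (illustrative only; not necessarily the exact template change in this PR), the hook pod spec would render tolerations along these lines:

      {{- /* Illustrative sketch: prefer the CRD-hook-specific tolerations,
             otherwise fall back to the daemonset (linux) tolerations. */}}
      {{- if .Values.linux.crds.tolerations }}
      tolerations:
{{ toYaml .Values.linux.crds.tolerations | indent 8 }}
      {{- else if .Values.linux.tolerations }}
      tolerations:
{{ toYaml .Values.linux.tolerations | indent 8 }}
      {{- end }}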
Backward compatibility is preserved by defaulting to linux.tolerations when no explicit crds.tolerations are provided. With the default values we get:
---
# Source: secrets-store-csi-driver/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: secrets-store-csi-driver
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
---
# Source: secrets-store-csi-driver/templates/role-secretproviderclasses-admin.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rbac.authorization.k8s.io/aggregate-to-edit: "true"
name: secretproviderclasses-admin-role
rules:
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasses
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
---
# Source: secrets-store-csi-driver/templates/role-secretproviderclasses-viewer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: secretproviderclasses-viewer-role
rules:
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasses
verbs:
- get
- list
- watch
---
# Source: secrets-store-csi-driver/templates/role-secretproviderclasspodstatuses-viewer.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rbac.authorization.k8s.io/aggregate-to-view: "true"
name: secretproviderclasspodstatuses-viewer-role
rules:
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasspodstatuses
verbs:
- get
- list
- watch
---
# Source: secrets-store-csi-driver/templates/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: secretproviderclasses-role
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasses
verbs:
- get
- list
- watch
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasspodstatuses
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- secrets-store.csi.x-k8s.io
resources:
- secretproviderclasspodstatuses/status
verbs:
- get
- patch
- update
- apiGroups:
- storage.k8s.io
resourceNames:
- secrets-store.csi.k8s.io
resources:
- csidrivers
verbs:
- get
- list
- watch
---
# Source: secrets-store-csi-driver/templates/role_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: secretproviderclasses-rolebinding
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: secretproviderclasses-role
subjects:
- kind: ServiceAccount
name: secrets-store-csi-driver
namespace: default
---
# Source: secrets-store-csi-driver/templates/secrets-store-csi-driver.yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: release-name-secrets-store-csi-driver
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
spec:
selector:
matchLabels:
app: secrets-store-csi-driver
updateStrategy:
rollingUpdate:
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
kubectl.kubernetes.io/default-container: secrets-store
spec:
serviceAccountName: secrets-store-csi-driver
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: type
operator: NotIn
values:
- virtual-kubelet
containers:
- name: node-driver-registrar
image: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.10.0"
args:
- --v=5
- --csi-address=/csi/csi.sock
- --kubelet-registration-path=/var/lib/kubelet/plugins/csi-secrets-store/csi.sock
imagePullPolicy: IfNotPresent
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 20Mi
- name: secrets-store
image: "registry.k8s.io/csi-secrets-store/driver:v1.4.3"
args:
- "--endpoint=$(CSI_ENDPOINT)"
- "--nodeid=$(KUBE_NODE_NAME)"
- "--provider-volume=/var/run/secrets-store-csi-providers"
- "--additional-provider-volume-paths=/etc/kubernetes/secrets-store-csi-providers"
- "--metrics-addr=:8095"
- "--provider-health-check-interval=2m"
- "--max-call-recv-msg-size=4194304"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
ports:
- containerPort: 9808
name: healthz
protocol: TCP
- containerPort: 8095
name: metrics
protocol: TCP
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: healthz
initialDelaySeconds: 30
timeoutSeconds: 10
periodSeconds: 15
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: mountpoint-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: Bidirectional
- name: providers-dir
mountPath: /var/run/secrets-store-csi-providers
- name: providers-dir-0
mountPath: "/etc/kubernetes/secrets-store-csi-providers"
resources:
limits:
cpu: 200m
memory: 200Mi
requests:
cpu: 50m
memory: 100Mi
- name: liveness-probe
image: "registry.k8s.io/sig-storage/livenessprobe:v2.12.0"
imagePullPolicy: IfNotPresent
args:
- --csi-address=/csi/csi.sock
- --probe-timeout=3s
- --http-endpoint=0.0.0.0:9808
- -v=2
volumeMounts:
- name: plugin-dir
mountPath: /csi
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 10m
memory: 20Mi
volumes:
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet/pods
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry/
type: Directory
- name: plugin-dir
hostPath:
path: /var/lib/kubelet/plugins/csi-secrets-store/
type: DirectoryOrCreate
- name: providers-dir
hostPath:
path: /var/run/secrets-store-csi-providers
type: DirectoryOrCreate
- name: providers-dir-0
hostPath:
path: "/etc/kubernetes/secrets-store-csi-providers"
type: DirectoryOrCreate
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
---
# Source: secrets-store-csi-driver/templates/csidriver.yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
name: secrets-store.csi.k8s.io
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
spec:
podInfoOnMount: true
attachRequired: false
# Added in Kubernetes 1.16 with default mode of Persistent. Secrets Store CSI Driver needs Ephemeral to be set.
volumeLifecycleModes:
- Ephemeral
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "1"
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "2"
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "1"
rules:
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "create", "update", "patch"]
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "2"
rules:
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: ["get", "patch"]
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "1"
subjects:
- kind: ServiceAccount
name: release-name-secrets-store-csi-driver-upgrade-crds
namespace: default
roleRef:
kind: ClusterRole
name: release-name-secrets-store-csi-driver-upgrade-crds
apiGroup: rbac.authorization.k8s.io
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
helm.sh/hook-weight: "2"
subjects:
- kind: ServiceAccount
name: release-name-secrets-store-csi-driver-keep-crds
namespace: default
roleRef:
kind: ClusterRole
name: release-name-secrets-store-csi-driver-keep-crds
apiGroup: rbac.authorization.k8s.io
---
# Source: secrets-store-csi-driver/templates/crds-upgrade-hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: secrets-store-csi-driver-upgrade-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-weight: "10"
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
spec:
backoffLimit: 3
template:
metadata:
name: release-name-secrets-store-csi-driver-upgrade-crds
spec:
serviceAccountName: release-name-secrets-store-csi-driver-upgrade-crds
restartPolicy: Never
containers:
- name: crds-upgrade
image: "registry.k8s.io/csi-secrets-store/driver-crds:v1.4.3"
args:
- apply
- -f
- crds/
imagePullPolicy: IfNotPresent
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
---
# Source: secrets-store-csi-driver/templates/keep-crds-upgrade-hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: secrets-store-csi-driver-keep-crds
namespace: default
labels:
app.kubernetes.io/instance: "release-name"
app.kubernetes.io/managed-by: "Helm"
app.kubernetes.io/name: "secrets-store-csi-driver"
app.kubernetes.io/version: "1.4.3"
app: secrets-store-csi-driver
helm.sh/chart: "secrets-store-csi-driver-1.4.3"
annotations:
helm.sh/hook: pre-upgrade
helm.sh/hook-weight: "20"
helm.sh/hook-delete-policy: "hook-succeeded,before-hook-creation"
spec:
backoffLimit: 3
template:
metadata:
name: release-name-secrets-store-csi-driver-keep-crds
spec:
serviceAccountName: release-name-secrets-store-csi-driver-keep-crds
restartPolicy: Never
containers:
- name: crds-keep
image: "registry.k8s.io/csi-secrets-store/driver-crds:v1.4.3"
args:
- patch
- crd
- secretproviderclasses.secrets-store.csi.x-k8s.io
- secretproviderclasspodstatuses.secrets-store.csi.x-k8s.io
- -p
- '{"metadata":{"annotations": {"helm.sh/resource-policy": "keep"}}}'
imagePullPolicy: IfNotPresent
nodeSelector:
kubernetes.io/os: linux
tolerations:
- operator: Exists
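With linux.crds.tolerations left unset, the hook Jobs above fall back to the daemonset toleration (Exists). For a more realistic split, an override values file along these lines could give the hooks their own toleration (the file name and taint key are illustrative, not part of this PR):

# crds-tolerations-override.yaml (illustrative example only)
linux:
  # Daemonset pods still tolerate everything, as before.
  tolerations:
    - operator: "Exists"
  crds:
    # CRD upgrade/keep hook Jobs only tolerate the control-plane taint.
    tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"

Passing such a file to helm install/upgrade with -f would then render the daemonset and the hook Jobs with different tolerations.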
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, using fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when the PR gets merged):
Fixes #1535
Special notes for your reviewer:
TODOs:
- [x] squashed commits
- [x] includes documentation
- [ ] adds unit tests
Codecov Report
All modified and coverable lines are covered by tests ✅
Project coverage is 35.71%. Comparing base (87f51ec) to head (a026885). Report is 21 commits behind head on main.
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1538      +/-   ##
==========================================
- Coverage   35.83%   35.71%   -0.12%
==========================================
  Files          63       63
  Lines        3759     3757       -2
==========================================
- Hits         1347     1342       -5
- Misses       2268     2272       +4
+ Partials      144      143       -1
View full report in Codecov by Sentry.
Have feedback on the report? Share it here.
Hey @sstarcher any update on this one?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: omerap12, sstarcher. Once this PR has been reviewed and has the lgtm label, please assign tam7t for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.