MountVolume.SetUp failed for volume "tls-certs" : secret "vpa-tls-certs" not found
I am having a similar issue to https://github.com/kubernetes/autoscaler/issues/3397 and https://github.com/kubernetes/autoscaler/issues/2810.
I have tried the suggestions in both, but I am still unable to successfully set up the VPA.
Which component are you using?: vertical-pod-autoscaler
What version of the component are you using?: vpa-release-0.8
What k8s version are you using (kubectl version)?:
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10+IKS", GitCommit:"ab42d8f6e786f36f7d87dd5f50880555b6991836", GitTreeState:"clean", BuildDate:"2022-02-18T14:23:15Z", GoVersion:"go1.16.14", Compiler:"gc", Platform:"linux/amd64"}
What environment is this in?: Windows 10, deploying to IBM Kubernetes 1.21.10_1550 using Git Bash
What did you expect to happen?: The autoscaler to install successfully
What happened instead?: Got an error while ./hack/vpa-up.sh was generating certs for the Admission Controller
How to reproduce it (as minimally and precisely as possible):
- Downloaded the autoscaler:
- git clone https://github.com/kubernetes/autoscaler.git
- git checkout -b vpa-release-0.8 origin/vpa-release-0.8
- $ kubectl apply -f deploy/
service/vpa-webhook created
serviceaccount/vpa-recommender created
deployment.apps/vpa-recommender created
serviceaccount/vpa-updater created
deployment.apps/vpa-updater created
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.autoscaling.k8s.io created
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.autoscaling.k8s.io created
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.autoscaling.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.autoscaling.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.poc.autoscaling.k8s.io unchanged
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.poc.autoscaling.k8s.io unchanged
clusterrole.rbac.authorization.k8s.io/system:metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:vpa-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:evictioner created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-actor created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-checkpoint-actor created
clusterrole.rbac.authorization.k8s.io/system:vpa-target-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-target-reader-binding created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-evictionter-binding created
serviceaccount/vpa-admission-controller created
clusterrole.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-admission-controller created
clusterrole.rbac.authorization.k8s.io/system:vpa-status-reader created
clusterrolebinding.rbac.authorization.k8s.io/system:vpa-status-reader-binding created
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalers.autoscaling.k8s.io configured
customresourcedefinition.apiextensions.k8s.io/verticalpodautoscalercheckpoints.autoscaling.k8s.io configured
- $ ./hack/vpa-up.sh
Warning: apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition
Error from server (AlreadyExists): error when creating "STDIN": customresourcedefinitions.apiextensions.k8s.io "verticalpodautoscalers.autoscaling.k8s.io" already exists
Error from server (AlreadyExists): error when creating "STDIN": customresourcedefinitions.apiextensions.k8s.io "verticalpodautoscalercheckpoints.autoscaling.k8s.io" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "system:metrics-reader" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-actor" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-checkpoint-actor" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "system:evictioner" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:metrics-reader" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-actor" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-checkpoint-actor" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-target-reader" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-target-reader-binding" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-evictionter-binding" already exists
Error from server (AlreadyExists): error when creating "STDIN": serviceaccounts "vpa-admission-controller" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-admission-controller" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-admission-controller" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-status-reader" already exists
Error from server (AlreadyExists): error when creating "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-status-reader-binding" already exists
Error from server (AlreadyExists): error when creating "STDIN": serviceaccounts "vpa-updater" already exists
Error from server (AlreadyExists): error when creating "STDIN": deployments.apps "vpa-updater" already exists
Error from server (AlreadyExists): error when creating "STDIN": serviceaccounts "vpa-recommender" already exists
Error from server (AlreadyExists): error when creating "STDIN": deployments.apps "vpa-recommender" already exists
Generating certs for the VPA Admission Controller in /tmp/vpa-certs.
Generating RSA private key, 2048 bit long modulus (2 primes)
........+++++
.......................................................................................+++++
e is 65537 (0x010001)
name is expected to be in the format /type0=value0/type1=value1/type2=... where characters may be escaped by \. This name is not in that format: 'C:/Program Files/Git/CN=vpa_webhook_ca'
problems making Certificate Request
Error from server (AlreadyExists): error when creating "STDIN": deployments.apps "vpa-admission-controller" already exists
Error from server (AlreadyExists): error when creating "STDIN": services "vpa-webhook" already exists
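Two notes on the output above. The AlreadyExists errors are expected here, since kubectl apply -f deploy/ had already created those objects (presumably running ./hack/vpa-down.sh first would give a clean slate). The -subj error is the real failure: Git Bash (MSYS) rewrites arguments that begin with "/" as Windows paths, turning /CN=vpa_webhook_ca into C:/Program Files/Git/CN=vpa_webhook_ca, so the certs and the vpa-tls-certs secret never get created. A minimal sketch of two workarounds, assuming Git for Windows honors MSYS_NO_PATHCONV to disable that conversion:

$ MSYS_NO_PATHCONV=1 ./hack/vpa-up.sh
$ # or sanity-check the double-slash escape on a standalone openssl call:
$ openssl req -new -x509 -nodes -days 365 -subj "//CN=vpa_webhook_ca" -keyout caKey.pem -out caCert.pem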
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6ddb89465c-zwl9d 1/1 Running 0 24d
calico-node-7mbjd 1/1 Running 0 24d
calico-node-8tbvd 1/1 Running 0 24d
calico-node-z95wz 1/1 Running 0 24d
calico-typha-69888c54f4-9td7q 1/1 Running 0 24d
calico-typha-69888c54f4-g5jd8 1/1 Running 0 24d
calico-typha-69888c54f4-jrzlh 1/1 Running 0 24d
coredns-866dc6bbdd-kw48b 1/1 Running 0 56d
coredns-866dc6bbdd-nkff7 1/1 Running 0 56d
coredns-866dc6bbdd-znj6r 1/1 Running 0 56d
coredns-autoscaler-7dc7c85789-gsq4s 1/1 Running 0 56d
dashboard-metrics-scraper-68d584c665-6gmkh 1/1 Running 0 56d
ibm-file-plugin-f87b756fc-rc48b 1/1 Running 0 24d
ibm-iks-cluster-autoscaler-79bdd6fdb8-htm2r 1/1 Running 0 3d21h
ibm-keepalived-watcher-99hlr 1/1 Running 0 59d
ibm-keepalived-watcher-qhwxd 1/1 Running 0 59d
ibm-keepalived-watcher-zhxf4 1/1 Running 0 59d
ibm-master-proxy-static-10.209.32.132 2/2 Running 0 63d
ibm-master-proxy-static-10.209.32.133 2/2 Running 0 63d
ibm-master-proxy-static-10.209.32.161 2/2 Running 0 63d
ibm-storage-watcher-5f4d96bb69-w6p6r 1/1 Running 0 24d
ibmcloud-iks-debug-6665d56875-bc7rx 1/1 Running 0 3d21h
ibmcloud-iks-debug-daemonset-8tj4l 1/1 Running 0 3d21h
ibmcloud-iks-debug-daemonset-pvvx9 1/1 Running 0 3d21h
ibmcloud-iks-debug-daemonset-qnwkt 1/1 Running 0 3d21h
konnectivity-agent-cn4b7 1/1 Running 0 24d
konnectivity-agent-s2dhg 1/1 Running 0 24d
konnectivity-agent-wzk5r 1/1 Running 0 24d
kubernetes-dashboard-f9fc56dd9-5hs99 1/1 Running 1 56d
metrics-server-696d96fbf7-2sphb 3/3 Running 0 24d
public-crc31nk20d0agu6dvbnodg-alb1-74c46578b7-55vsp 1/1 Running 1 7d2h
public-crc31nk20d0agu6dvbnodg-alb1-74c46578b7-m6zkh 1/1 Running 0 7d2h
vpa-admission-controller-5f8bb8c868-wvqzz 0/1 ContainerCreating 0 8m42s
vpa-recommender-65bc4c87-27292 1/1 Running 0 8m47s
vpa-updater-568c8bdd7-rqq69 1/1 Running 0 8m46s
$ kubectl describe pod vpa-admission-controller-5f8bb8c868-wvqzz -n kube-system
Name: vpa-admission-controller-5f8bb8c868-wvqzz
Namespace: kube-system
Priority: 0
Node: 10.209.32.133/10.209.32.133
Start Time: Tue, 29 Mar 2022 13:58:55 -0700
Labels: app=vpa-admission-controller
pod-template-hash=5f8bb8c868
Annotations: kubernetes.io/psp: ibm-privileged-psp
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/vpa-admission-controller-5f8bb8c868
Containers:
admission-controller:
Container ID:
Image: us.gcr.io/k8s-artifacts-prod/autoscaling/vpa-admission-controller:0.8.1
Image ID:
Port: 8000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 200m
memory: 500Mi
Requests:
cpu: 50m
memory: 200Mi
Environment:
NAMESPACE: kube-system (v1:metadata.namespace)
Mounts:
/etc/tls-certs from tls-certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qk4x9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
tls-certs:
Type: Secret (a volume populated by a Secret)
SecretName: vpa-tls-certs
Optional: false
kube-api-access-qk4x9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 600s
node.kubernetes.io/unreachable:NoExecute op=Exists for 600s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m54s default-scheduler Successfully assigned kube-system/vpa-admission-controller-5f8bb8c868-wvqzz to 10.209.32.133
Warning FailedMount 100s (x12 over 9m54s) kubelet MountVolume.SetUp failed for volume "tls-certs" : secret "vpa-tls-certs" not found
Warning FailedMount 64s (x4 over 7m51s) kubelet Unable to attach or mount volumes: unmounted volumes=[tls-certs], unattached volumes=[tls-certs kube-api-access-qk4x9]: timed out waiting for the condition
$ openssl version
OpenSSL 1.1.1m  14 Dec 2021
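For anyone else debugging this state: the FailedMount events above just mean the vpa-tls-certs secret was never created, because cert generation failed earlier. A quick way to confirm, and to retry only the cert step once the subject-name problem is fixed (assuming the script lives at pkg/admission-controller/gencerts.sh as in the vpa-release-0.8 tree):

$ kubectl get secret vpa-tls-certs -n kube-system
$ bash ./pkg/admission-controller/gencerts.sh

Once the secret exists, deleting the vpa-admission-controller pod lets its ReplicaSet recreate it with the volume mounted.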
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I was able to resolve this issue by running the command in WSL, as suggested in issue #5316.
I did NOT have to modify gencerts.sh line 44 by adding an extra "/".
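For anyone who wants to go the gencerts.sh route instead of WSL: the extra "/" fix refers to doubling the leading slash of the openssl -subj argument so MSYS path conversion leaves it alone. A hypothetical before/after (the exact contents of line 44 are an assumption about the vpa-release-0.8 script):

-subj "/CN=vpa_webhook_ca"    # original: Git Bash rewrites this into C:/Program Files/Git/CN=vpa_webhook_ca
-subj "//CN=vpa_webhook_ca"   # extra "/" defeats the MSYS path conversion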
I was running it on a Mac, and the command would fail under zsh with:
Error adding request extensions defined via -addext C0FAADF201000000:error:0580008C:x509 certificate routines:X509at_add1_attr_by_NID:duplicate attribute:crypto/x509/x509_att.c:194:
I tried it in bash and it worked like a charm.
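In other words, one way to reproduce that fix is to invoke the script under bash explicitly (run from the vertical-pod-autoscaler directory, same as above):

$ bash ./hack/vpa-up.sh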