vsphere-csi-driver
Improve logging for attempted vCenter requests
Is this a BUG REPORT or FEATURE REQUEST?: /kind bug
What happened:
When creating a new PVC, the vSphere CSI controller throws these errors:
vsphere-csi-controller-5fcd4f5bf8-gdjzw vsphere-csi-controller 2025-01-07T19:43:00.901Z ERROR vsphere/pbm.go:94 failed to get StoragePolicyID from StoragePolicyName XY with err: ServerFaultCode: Request missing value for required parameter 'profileIds' to method 'PbmRetrieveContent' {"TraceId": "46777fb8-05b6-4f3b-8a9d-05cc5de94c23"}
vsphere-csi-controller-5fcd4f5bf8-gdjzw vsphere-csi-controller-5fcd4f5bf8-gdjzw csi-provisioner E0107 19:43:00.901913 1 controller.go:957] error syncing claim "b98f5515-ae59-4ada-b867-e090ed3fe66b": failed to provision volume with StorageClass "vsphere-csi": rpc error: code = Internal desc = failed to get policy ID for storage policy name "XY ". Error: ServerFaultCode: Request missing value for required parameter 'profileIds' to method 'PbmRetrieveContent'
vsphere-csi-controller google.golang.org/grpc.(*Server).serveStreams.func2.1
vsphere-csi-controller-5fcd4f5bf8-gdjzw vsphere-csi-controller 2025-01-07T19:43:00.901Z DEBUG vanilla/controller.go:2024 createVolumeInternal: returns fault "csi.fault.Internal" {"TraceId": "46777fb8-05b6-4f3b-8a9d-05cc5de94c23"}
vsphere-csi-controller-5fcd4f5bf8-gdjzw vsphere-csi-controller 2025-01-07T19:43:00.901Z ERROR vanilla/controller.go:2029 Operation failed, reporting failure status to Prometheus. Operation Type: "create-volume", Volume Type: "block", Fault Type: "csi.fault.Internal" {"TraceId": "46777fb8-05b6-4f3b-8a9d-05cc5de94c23"}
The StorageClass in use:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi
parameters:
  storagepolicyname: XY
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
The storage policy exists; I checked with govc and the vSphere UI. If it didn't, there would be different errors. The same setup works with other clusters on other vCenter instances.
What you expected to happen:
Successful creation of the PVC.
How to reproduce it (as minimally and precisely as possible):
Not sure, I just created the PVC.
Anything else we need to know?:
I believe the error Request missing value for required parameter 'profileIds' to method 'PbmRetrieveContent' stems from this govmomi method: https://github.com/vmware/govmomi/blob/8f7d2338c687642c7eb7dfe17ee7b2e26d809fd3/pbm/client.go#L212
used here by the controller: https://github.com/kubernetes-sigs/vsphere-csi-driver/blob/master/pkg/common/cns-lib/vsphere/pbm.go#L92
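For reference, a minimal sketch of that call chain using govmomi's public pbm API (the URL and policy name are placeholders; this is my reconstruction for illustration, not the driver's actual code):

package main

import (
	"context"
	"fmt"
	"net/url"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/pbm"
)

// resolvePolicyID mirrors what pbm.go does: resolve a storage policy name
// to its ID through the PBM endpoint. govmomi's ProfileIDByName first
// queries the IDs of all REQUIREMENT profiles visible to the session and
// then calls PbmRetrieveContent on that list; if the user can see no
// profiles at all, the list is empty and vCenter faults with
// "Request missing value for required parameter 'profileIds'".
func resolvePolicyID(ctx context.Context, vcURL *url.URL, policyName string) (string, error) {
	vc, err := govmomi.NewClient(ctx, vcURL, true) // true: skip TLS verification
	if err != nil {
		return "", fmt.Errorf("connect to vCenter: %w", err)
	}
	pbmClient, err := pbm.NewClient(ctx, vc.Client)
	if err != nil {
		return "", fmt.Errorf("create PBM client: %w", err)
	}
	return pbmClient.ProfileIDByName(ctx, policyName)
}

func main() {
	u, _ := url.Parse("https://user:pass@vcenter.example.com/sdk") // placeholder
	id, err := resolvePolicyID(context.Background(), u, "XY")
	fmt.Println(id, err)
}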
Environment:
- csi-vsphere version: 3.3.1
- vsphere-cloud-controller-manager version: 1.32.1
- Kubernetes version: 1.30
- vSphere version: vSphere Client version 8.0.3.00400
- OS (e.g. from /etc/os-release): Debian 12
- Kernel (e.g. uname -a): Linux 6.1.0-27-amd64 SMP PREEMPT_DYNAMIC Debian 6.1.115-1 (2024-11-01) x86_64 GNU/Linux
- Install tools: Kubeadm / ArgoCD
- Others: -
Initially I opened this because I thought it was a bug. It turned out the vCenter user was lacking the required privileges; adding them solved the issue. Nevertheless, the logs were misleading: an INFO entry suggesting that the user doesn't have permission would ease debugging by quite a bit.
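As a hedged sketch of what such a log hint could look like (matching on the fault string is my heuristic, not the driver's actual approach; a real fix would inspect the typed SOAP fault):

package main

import (
	"errors"
	"log"
	"strings"
)

// hintOnPBMFault logs an actionable hint for the specific fault seen in
// this issue before passing the error along unchanged.
func hintOnPBMFault(err error) error {
	if err != nil && strings.Contains(err.Error(),
		"required parameter 'profileIds' to method 'PbmRetrieveContent'") {
		log.Println("INFO: PBM query returned no storage profiles; the vCenter " +
			"user may lack the 'Profile-driven storage view' privilege")
	}
	return err
}

func main() {
	// Simulate the fault from the logs above.
	fault := errors.New("ServerFaultCode: Request missing value for required " +
		"parameter 'profileIds' to method 'PbmRetrieveContent'")
	_ = hintOnPBMFault(fault)
}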
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/reopen
@luanaBanana: Reopened this issue.
In response to this:
/reopen
@luanaBanana I have the same problem.
Previously, the PVCs were created and bound flawlessly. But after realizing the virtual disks in VMware are thin-provisioned, I tried to create a storage policy to reserve the space with thick provisioning. I then recreated the StorageClass with storagepolicyname set to the new policy, but got the same error.
Maybe you can try creating a StorageClass without the storage policy parameter; that works for me (see the sketch below).
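For illustration, a sketch of such a StorageClass: the one from the report above with the storagepolicyname parameter removed (the name vsphere-csi-no-policy is made up; as far as I know, without a policy the driver selects a suitable shared datastore on its own, which may or may not be what you want):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-no-policy
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate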
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned