vsphere-csi-driver
Support running on Nomad and not only k8s
Currently the CSI driver does not work with Nomad because it requires an in-cluster config for the Kubernetes API, which I assume it uses internally. If this is a true CSI plugin based on the CSI standard, why can the vSphere CSI driver only run on k8s while CSI is supported by multiple other orchestrators?
Is there any plan to support other orchestrators, or any workaround to get it running today that I haven't found?
The log output when it crashes is:
{"level":"info","time":"2020-12-13T16:58:33.965235893Z","caller":"logger/logger.go:37","msg":"Setting default log level to :"PRODUCTION""}
{"level":"info","time":"2020-12-13T16:58:33.966096829Z","caller":"config/config.go:318","msg":"Could not stat /etc/cloud/csi-vsphere.conf, reading config params from env","TraceId":"943a935b-8843-4f9d-8eb6-941b3a05ccbf"}
{"level":"info","time":"2020-12-13T16:58:33.966235055Z","caller":"config/config.go:272","msg":"No Net Permissions given in Config. Using default permissions.","TraceId":"943a935b-8843-4f9d-8eb6-941b3a05ccbf"}
{"level":"info","time":"2020-12-13T16:58:33.96696381Z","caller":"vanilla/controller.go:94","msg":"Initializing CNS controller","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:33.967056816Z","caller":"vsphere/virtualcentermanager.go:64","msg":"Initializing defaultVirtualCenterManager...","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:33.967120375Z","caller":"vsphere/virtualcentermanager.go:66","msg":"Successfully initialized defaultVirtualCenterManager","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:33.967200622Z","caller":"vsphere/virtualcentermanager.go:110","msg":"Successfully registered VC "172.16.20.2"","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:33.967231043Z","caller":"volume/manager.go:93","msg":"Initializing new volume.defaultManager...","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:34.238289941Z","caller":"vsphere/virtualcenter.go:143","msg":"New session ID for 'VSPHERE.LOCAL\Administrator' = 526897a8-c516-8d1e-c2e5-6bbc87fd998b","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:34.238375824Z","caller":"node/manager.go:75","msg":"Initializing node.defaultManager...","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:34.238395084Z","caller":"node/manager.go:79","msg":"node.defaultManager initialized","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"info","time":"2020-12-13T16:58:34.238409932Z","caller":"kubernetes/kubernetes.go:67","msg":"k8s client using in-cluster config","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b"}
{"level":"error","time":"2020-12-13T16:58:34.239021148Z","caller":"kubernetes/kubernetes.go:70","msg":"InClusterConfig failed unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b","stacktrace":"sigs.k8s.io/vsphere-csi-driver/pkg/kubernetes.NewClient\n\t/build/pkg/kubernetes/kubernetes.go:70\nsigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*Nodes).Initialize\n\t/build/pkg/csi/service/vanilla/nodes.go:44\nsigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).Init\n\t/build/pkg/csi/service/vanilla/controller.go:145\nsigs.k8s.io/vsphere-csi-driver/pkg/csi/service.(*service).BeforeServe\n\t/build/pkg/csi/service/service.go:121\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve.func1\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:246\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:66\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:57\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:211\ngithub.com/rexray/gocsi.Run\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:130\nmain.main\n\t/build/cmd/vsphere-csi/main.go:41\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
{"level":"error","time":"2020-12-13T16:58:34.239128956Z","caller":"vanilla/nodes.go:46","msg":"Creating Kubernetes client failed. Err: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b","stacktrace":"sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*Nodes).Initialize\n\t/build/pkg/csi/service/vanilla/nodes.go:46\nsigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).Init\n\t/build/pkg/csi/service/vanilla/controller.go:145\nsigs.k8s.io/vsphere-csi-driver/pkg/csi/service.(*service).BeforeServe\n\t/build/pkg/csi/service/service.go:121\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve.func1\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:246\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:66\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:57\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:211\ngithub.com/rexray/gocsi.Run\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:130\nmain.main\n\t/build/cmd/vsphere-csi/main.go:41\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
{"level":"error","time":"2020-12-13T16:58:34.239414507Z","caller":"vanilla/controller.go:147","msg":"failed to initialize nodeMgr. err=unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined","TraceId":"153b1573-d704-47a1-bb68-347c9bb73e4b","stacktrace":"sigs.k8s.io/vsphere-csi-driver/pkg/csi/service/vanilla.(*controller).Init\n\t/build/pkg/csi/service/vanilla/controller.go:147\nsigs.k8s.io/vsphere-csi-driver/pkg/csi/service.(*service).BeforeServe\n\t/build/pkg/csi/service/service.go:121\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve.func1\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:246\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:66\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:57\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:211\ngithub.com/rexray/gocsi.Run\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:130\nmain.main\n\t/build/cmd/vsphere-csi/main.go:41\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
{"level":"error","time":"2020-12-13T16:58:34.239493485Z","caller":"service/service.go:122","msg":"failed to init controller. Error: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined","TraceId":"943a935b-8843-4f9d-8eb6-941b3a05ccbf","stacktrace":"sigs.k8s.io/vsphere-csi-driver/pkg/csi/service.(*service).BeforeServe\n\t/build/pkg/csi/service/service.go:122\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve.func1\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:246\nsync.(*Once).doSlow\n\t/usr/local/go/src/sync/once.go:66\nsync.(*Once).Do\n\t/usr/local/go/src/sync/once.go:57\ngithub.com/rexray/gocsi.(*StoragePlugin).Serve\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:211\ngithub.com/rexray/gocsi.Run\n\t/go/pkg/mod/github.com/rexray/[email protected]/gocsi.go:130\nmain.main\n\t/build/cmd/vsphere-csi/main.go:41\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
{"level":"info","time":"2020-12-13T16:58:34.239781643Z","caller":"service/service.go:106","msg":"configured: "csi.vsphere.vmware.com" with clusterFlavor: "VANILLA" and mode: "controller"","TraceId":"943a935b-8843-4f9d-8eb6-941b3a05ccbf"}
time="2020-12-13T16:58:34Z" level=info msg="removed sock file" path=/var/lib/csi/sockets/pluginproxy/csi.sock
time="2020-12-13T16:58:34Z" level=fatal msg="grpc failed" error="unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined"
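For context on the failure: the fatal error at the end is the standard message client-go returns when an in-cluster configuration is requested outside a Kubernetes pod, and the stack trace shows the controller building a Kubernetes client during initialization. A minimal sketch of that pattern for illustration (the helper name here is hypothetical; only rest.InClusterConfig and kubernetes.NewForConfig are real client-go calls):

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newInClusterClient is a hypothetical helper mirroring what the controller
// does at startup: it asks client-go for an in-cluster config, which only
// works when the process runs inside a Kubernetes pod.
func newInClusterClient() (*kubernetes.Clientset, error) {
	// rest.InClusterConfig reads KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT
	// and the service-account token mounted by the kubelet. Under Nomad neither
	// exists, so it returns the "unable to load in-cluster configuration" error
	// seen in the log above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, fmt.Errorf("in-cluster config: %w", err)
	}
	return kubernetes.NewForConfig(cfg)
}

func main() {
	if _, err := newInClusterClient(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

So the CSI gRPC surface never even comes up: BeforeServe fails as soon as the Kubernetes client cannot be created, which is why the plugin crashes under Nomad even though Nomad only needs the CSI socket.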
@vrabbi The current focus is to make it work for Kubernetes and we are not testing it on any other container orchestrator. I'll mark this as a feature request.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Any update on this feature request?
There is no update on this. We will take it up when running CSI on a non-Kubernetes platform becomes a priority.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community. /close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten. Send feedback to sig-contributor-experience at kubernetes/community. /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Will this feature be prioritized anytime soon? It would really help cloud engineers running containerized workloads with Nomad in their vSphere environments.
/reopen /remove-lifecycle rotten /remove-lifecycle stale
@divyenpatel: Reopened this issue.
In response to this:
/reopen /remove-lifecycle rotten /remove-lifecycle stale
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I'm super confused and disappointed to find this. The old Docker plugin for vSphere is deprecated, with the CSI plugin being the obvious successor, but it appears not to be a CSI plugin so much as a "CSI plugin for Kubernetes only". This isn't a simple matter of other platforms being different, but a specific decision to go outside the CSI spec and also make direct calls to the Kubernetes API.
To add insult to injury, the homepage mentions this:
This repository provides tools and scripts for building and testing the vSphere CSI provider. This driver is in a stable GA state and is suitable for production use. Some of the features may be in the beta phase. Please refer feature matrix for more details. vSphere CSI driver requires vSphere 6.7 U3 or higher in order to operate.
The CSI driver, when used on Kubernetes, also requires the use of the out-of-tree vSphere Cloud Provider Interface CPI.
The line that has "when used on Kubernetes" is the only mention of Kubernetes in the README, and certainly doesn't read as "This plugin is not a general CSI compliant plugin and specifically will only ever work with Kubernetes currently". Needless to say this has caused wasted time and resources and makes me wonder why we're still running on vSphere.
Any update on CSI for Nomad in vSphere? Really considering moving to a different platform in order to implement this.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen /remove-lifecycle rotten /remove-lifecycle stale
@rismoney: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen /remove-lifecycle rotten /remove-lifecycle stale
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Will this driver work on Nomad?
Will this driver work on Nomad?
no
Does it have to be this way? It seems this is not technically CSI, since requiring Kubernetes API hooks as part of the interfacing defeats the point. The mission of CSI was to expose storage and abstraction for container orchestrators in general, not just Kubernetes. After all, it's not the KSI (Kubernetes Storage Interface)... What would it take to course correct?
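To illustrate the point about the spec itself: a CSI plugin is just a set of gRPC services (Identity, Controller, Node) served over a socket, defined by the container-storage-interface spec bindings, with no Kubernetes types involved. A minimal sketch of the orchestrator-agnostic Identity service (the socket path and plugin name below are placeholders, not from this driver):

```go
package main

import (
	"context"
	"log"
	"net"
	"os"

	"github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
)

// identity implements only the CSI Identity service, to show that the spec
// surface itself carries no Kubernetes dependency. A real driver would also
// implement the Controller and Node services from the same package.
type identity struct{}

func (identity) GetPluginInfo(ctx context.Context, _ *csi.GetPluginInfoRequest) (*csi.GetPluginInfoResponse, error) {
	// Placeholder name and version, for illustration only.
	return &csi.GetPluginInfoResponse{Name: "example.csi.placeholder", VendorVersion: "0.0.1"}, nil
}

func (identity) GetPluginCapabilities(ctx context.Context, _ *csi.GetPluginCapabilitiesRequest) (*csi.GetPluginCapabilitiesResponse, error) {
	return &csi.GetPluginCapabilitiesResponse{}, nil
}

func (identity) Probe(ctx context.Context, _ *csi.ProbeRequest) (*csi.ProbeResponse, error) {
	return &csi.ProbeResponse{}, nil
}

func main() {
	// Any CSI-capable orchestrator (Kubernetes, Nomad, Mesos) talks to the
	// plugin over a Unix domain socket like this one.
	const sock = "/tmp/csi.sock"
	_ = os.Remove(sock) // ignore error if the socket does not exist yet
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	csi.RegisterIdentityServer(srv, identity{})
	log.Fatal(srv.Serve(lis))
}
```

The vanilla controller in this repo layers Kubernetes-specific behavior (such as node discovery through the Kubernetes API, visible in the stack trace above) on top of that interface, which is where the Nomad incompatibility comes from rather than from CSI itself.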
/reopen /remove-lifecycle rotten /remove-lifecycle stale
@divyenpatel: Reopened this issue.
In response to this:
/reopen /remove-lifecycle rotten /remove-lifecycle stale
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
These bots suck. Ignorance isn't bliss. I feel like you shouldn't be allowed to claim CSI support if it doesn't work outside k8s.
I feel like you shouldn't be allowed to claim CSI support if it doesn't work outside k8s.
Agreed. In any case, did you check if there's a true CSI driver for the underlying storage you're using in vSphere? I'm thinking Dell EMC as an example.
/remove-lifecycle stale
I wanted to use this with Nomad as well. :/
I wanted to use this with Nomad as well. :/
+1