cloud-provider-openstack
[cinder-csi-plugin] Multi-region support for provisioning Cinder storage.
/kind feature
I have a scenario where we need to implement high availability across two datacenters, each operating as its own OpenStack region. As both regions are in the same city with a high-speed, low-latency (<1 ms) network between them, it would be possible to create a stretched cluster between the regions. (I'm aware it's not possible to create proper high availability of etcd with just two regions, but I believe it can be managed, and it's a separate issue.)
We do have some components in our cluster that need persistent storage, and we would like to deploy these components in Kubernetes and use the built-in region awareness to place the apps and their volumes.
Seems it should be possible to label each node with an appropriate topology.kubernetes.io/region label and then configure the Cinder CSI to map regions to a set of OpenStack credentials. That would end up somewhat similar to how, for example, the AWS EBS CSI driver manages different availability zones, with the awareness that only nodes in the correct region will be able to mount the volumes.
This might be a duplicate of #1924, but I only really care about the storage part; we wouldn't need the LoadBalancer support, for instance.
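To make this concrete, here is a rough sketch of what such a setup could look like. The provisioner name and the topology.kubernetes.io/region label are the ones already in use today; the `region` StorageClass parameter is hypothetical and is exactly the missing piece this feature request is asking for:

```yaml
# Nodes in each datacenter would carry the standard region label, e.g.:
#   kubectl label node <node-name> topology.kubernetes.io/region=region-a
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-region-a
provisioner: cinder.csi.openstack.org
volumeBindingMode: WaitForFirstConsumer
parameters:
  # Hypothetical parameter: which set of OpenStack credentials (region)
  # the driver should use when creating volumes for this class.
  region: region-a
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/region
        values:
          - region-a
```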
OpenStack region
The OpenStack region concept is multiple OpenStack clusters running and sharing Keystone/Horizon. I think you are talking about AZs, for which you might refer to the ignore-volume-az keyword at https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/cinder-csi-plugin/using-cinder-csi-plugin.md
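For reference, this is roughly where that keyword lives in cloud.conf (the [BlockStorage] section per the linked docs; the auth values below are placeholders):

```ini
[Global]
auth-url = https://keystone.example.com:5000/v3
application-credential-id = <id>
application-credential-secret = <secret>

[BlockStorage]
# Ignore the Cinder volume's availability zone during attach, for clouds
# where Nova and Cinder availability zones do not line up.
ignore-volume-az = true
```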
No, I'm specifically talking about regions. It would be really nice if OpenStack providers had their act together at the same level as AWS, GCP, etc., but it seems more common than not that providers choose to deploy separate regions in the same area, as opposed to deploying one big OpenStack cloud with proper availability zones set up. Because of this, I think it would be good if the Cinder CSI driver worked with this scenario. In my case there is no shared Keystone either, presumably to fault-isolate the regions.
At the very least, OVH and Hetzner have this availability-zone-less setup; it seems more common than not with public providers, and I'm guessing the same may be true for many privately hosted setups.
I'm guessing the same may be true for many privately hosted setups.
Can confirm with ~10 of our private clusters.
Seems it should be possible to label each node with an appropriate topology.kubernetes.io/region label and then configure the Cinder CSI to map regions to a set of OpenStack credentials.
OK, this makes sense. Some thoughts:
1) we support multiple cloud definitions, so this should be fine
2) we should be able to pass a region parameter in controllerServer.CreateVolume (see the sketch below)
3) with 1) and 2), we are able to talk to a different region to create the volume
4) during attachment, we need more time to figure out how to find the corresponding node to do the attachment (e.g. the same region)

As I don't have such an environment, I'd be happy to have someone cooperate on this topic if you like, thanks.
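To build on that, here is a minimal Go sketch of how points 1)–4) could fit together. It is not the actual driver code: the types stand in for csi.CreateVolumeRequest and the Gophercloud/Cinder clients, and the per-region client map and the "region" parameter are assumptions for illustration.

```go
// Hypothetical sketch only: not the actual cinder-csi-plugin implementation.
package main

import (
	"errors"
	"fmt"
)

// regionKey is the standard Kubernetes topology label for regions.
const regionKey = "topology.kubernetes.io/region"

// cloudClient stands in for a per-region OpenStack client built from one
// cloud definition in cloud.conf (point 1).
type cloudClient struct {
	region string
}

func (c *cloudClient) createVolume(name string, sizeGiB int) (string, error) {
	// A real implementation would call the Cinder API of this region here.
	return fmt.Sprintf("%s/%s (%d GiB)", c.region, name, sizeGiB), nil
}

// createVolumeRequest is a trimmed-down stand-in for csi.CreateVolumeRequest,
// keeping only what matters for region selection (point 2).
type createVolumeRequest struct {
	Name              string
	SizeGiB           int
	Parameters        map[string]string   // StorageClass parameters
	PreferredTopology []map[string]string // topology segments from the provisioner
}

// controller keeps one client per region (point 1) plus a default.
type controller struct {
	clouds        map[string]*cloudClient
	defaultRegion string
}

// pickRegion resolves the target region from an explicit StorageClass
// parameter first, then from the preferred topology, then the default.
func (cs *controller) pickRegion(req createVolumeRequest) string {
	if r := req.Parameters["region"]; r != "" {
		return r
	}
	for _, segments := range req.PreferredTopology {
		if r := segments[regionKey]; r != "" {
			return r
		}
	}
	return cs.defaultRegion
}

// CreateVolume talks to the cloud of the resolved region (point 3). The
// volume would also have to advertise that region as accessible topology so
// that only nodes in the same region are considered for attachment (point 4).
func (cs *controller) CreateVolume(req createVolumeRequest) (string, error) {
	region := cs.pickRegion(req)
	cloud, ok := cs.clouds[region]
	if !ok {
		return "", errors.New("no OpenStack credentials configured for region " + region)
	}
	return cloud.createVolume(req.Name, req.SizeGiB)
}

func main() {
	cs := &controller{
		clouds: map[string]*cloudClient{
			"region-a": {region: "region-a"},
			"region-b": {region: "region-b"},
		},
		defaultRegion: "region-a",
	}
	id, err := cs.CreateVolume(createVolumeRequest{
		Name:       "pvc-1234",
		SizeGiB:    10,
		Parameters: map[string]string{"region": "region-b"},
	})
	fmt.Println(id, err)
}
```

Whether the region should come from an explicit StorageClass parameter, from the topology segments passed by the external-provisioner, or both is left open here.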
I wrote such an MVP implementation about a year ago, but the PR never got reviewed.
If there is more interest in this now, I'd be happy to rebase and re-submit a PR.
Please resubmit so at least I can help review, thanks for the feedback!
@cambierr could you please help submit? Thanks
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@jichenjc: Reopened this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
This seems like a sane thing to do.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@MatthieuFin: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/remove-lifecycle rotten
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Could anyone help move this PR forward? https://github.com/kubernetes/cloud-provider-openstack/pull/2551
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@devfaz: You can't reopen an issue/PR unless you authored it or you are a collaborator.