azuredisk-csi-driver
Disable automatic creation of storage classes in AKS
**Is your feature request related to a problem? / Why is this needed**
Azure storage classes ("azurefile-csi", "azurefile-csi-premium", "managed-csi", "managed-csi-premium") are created automatically. I would like to create my own storage classes and disable the automatic creation of the *-csi storage classes. This is needed to avoid duplicate storage class configuration and to control which storage classes are available to developers.
**Describe the solution you'd like in detail**
It would be great to have a configuration option to enable/disable automatic storage class creation.
**Describe alternatives you've considered**
If this is already possible, it would be great to have documentation describing how the automatic creation of storage classes can be disabled.
Disabling the creation of the built-in storage classes is not supported on AKS, but you can replace an existing built-in storage class using this approach: https://github.com/Azure/AKS/issues/118#issuecomment-708257760
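As far as I understand the linked workaround, the idea is to delete the built-in class and then create your own class with the same name and your preferred parameters. A rough sketch (the skuName and other parameters here are only examples, not the built-in defaults):

```
# Replace the built-in managed-csi class with a custom one of the same name.
kubectl delete storageclass managed-csi

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-csi
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS   # example parameter, pick your own
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
EOF
```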
I currently have nine storage classes on the AKS cluster:
```
$ kubectl get sc
NAME                    PROVISIONER
azurefile               kubernetes.io/azure-file
azurefile-csi           file.csi.azure.com
azurefile-csi-premium   file.csi.azure.com
azurefile-premium       kubernetes.io/azure-file
default (default)       disk.csi.azure.com
managed                 kubernetes.io/azure-disk
managed-csi             disk.csi.azure.com
managed-csi-premium     disk.csi.azure.com
managed-premium         kubernetes.io/azure-disk
```
I can replace the configuration of the old (kubernetes.io/azure-file) and new (file.csi.azure.com) storage classes, but since all application deployments use the old storage class names, I have to make sure that the old and new storage classes always have the same configuration. As a result, I end up with duplicate storage classes with identical configuration, since I can delete neither the old nor the new ones.
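As an aside, one way to verify that an old/new pair really has identical configuration is to diff the two objects with the server-managed metadata stripped; a rough sketch, assuming jq is available:

```
# Compare an old/new storage class pair, ignoring server-managed metadata.
kubectl get sc managed -o json | jq 'del(.metadata)' > managed.json
kubectl get sc managed-csi -o json | jq 'del(.metadata)' > managed-csi.json
diff managed.json managed-csi.json
```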
Are there any plans to remove the old (kubernetes.io/azure-disk) storage classes in the future?
@murech the kubernetes.io/azure-disk storage classes have already been removed since AKS 1.21
@andyzhangx thanks for the notice. We have AKS 1.21.7 installed. The following storage classes are now shown:
```
$ kubectl get sc
NAME                    PROVISIONER
azurefile               file.csi.azure.com
azurefile-csi           file.csi.azure.com
azurefile-csi-premium   file.csi.azure.com
azurefile-premium       file.csi.azure.com
default (default)       disk.csi.azure.com
managed                 disk.csi.azure.com
managed-csi             disk.csi.azure.com
managed-csi-premium     disk.csi.azure.com
managed-premium         disk.csi.azure.com
```
Persistent volumes need to be recreated in order to benefit from the new features. Since all storage classes now use CSI drivers, I assume it no longer matters whether a persistent volume is recreated on an "old" storage class (e.g. managed) or on a "new" one (e.g. managed-csi).
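For illustration, moving a volume to the new class is then just a matter of pointing the recreated claim at it; a minimal sketch (name and size are placeholders):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example              # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-csi   # the "new" storage class
  resources:
    requests:
      storage: 10Gi
```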
Unfortunately, I still have every storage class twice, and deleting a storage class just recreates it. Is there a way to delete all non-"-csi" storage classes (except default) with AKS 1.21.7 or a later version?
@murech AKS cannot delete those built-in storage classes, since the old storage classes (e.g. azurefile) may be used by many existing clusters that were upgraded from older versions.
@andyzhangx we are planning to upgrade from 1.20.9 to 1.21.7. We will ask teams to migrate, i.e. recreate their persistent volumes using the "new" "-csi" storage class names; otherwise they would not benefit from the new features. After the migration we would like to remove the "old" (non "-csi") storage classes. But even once all our teams have migrated to the "new" storage classes, the "old" ones will still exist.
My point is that we end up with duplicate storage classes (e.g. managed and managed-csi) with identical drivers and configuration. The only way to prevent a team from using an "old" storage class would then be to implement policies (OPA Gatekeeper, Kyverno, etc.), along the lines of the sketch below.
I'm sorry for bothering you again, but my understanding of a migration is that the "legacy" component is decommissioned in the end.
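To make the policy idea concrete, here is a rough Kyverno sketch that rejects claims referencing the legacy class names. This assumes Kyverno is installed; the policy name and the class list are illustrative, and the syntax follows recent Kyverno versions:

```
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-legacy-storage-classes   # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: deny-legacy-storage-classes
      match:
        any:
          - resources:
              kinds:
                - PersistentVolumeClaim
      validate:
        message: "Legacy storage classes are deprecated; use the *-csi classes instead."
        deny:
          conditions:
            any:
              - key: "{{ request.object.spec.storageClassName || '' }}"
                operator: AnyIn
                value:
                  - managed
                  - managed-premium
                  - azurefile
                  - azurefile-premium
```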
@murech thanks for the info. Since we are still in the CSI migration process, we would like to keep the legacy storage classes for a period of time, as some users still depend on them.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.