cluster-api-provider-azure
Add support for node image upgrades to AKS node pools
⚠️ Cluster API Azure maintainers can ask to turn an issue-proposal into a CAEP when necessary. This is to be expected for large changes that impact multiple components, breaking changes, or new large features.
Goals
- Add node OS image upgrade support for AKS node pools, based on the AKS UpgradeProfile
- UpgradeLifeCycle support for node pools
Non-Goals/Future Work
User Story
As an operator, I would like my AKS node pools to have the latest node image applied so that the latest security and OS patches are in place.
Detailed Description
I would like my node pools to be auto-updated with the latest images, or to have something like a CSR approval flow where I approve a request and the latest image patch is then applied.
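For reference, the AKS-native mechanism this request maps to is the cluster auto-upgrade profile. A minimal sketch of the relevant ManagedCluster fields, assuming the standard AKS channel names (values shown are illustrative, not a recommendation):

autoUpgradeProfile:
  # Channel for Kubernetes version auto-upgrades (e.g. none, patch, stable, rapid)
  upgradeChannel: patch
  # Channel for node OS image upgrades; NodeImage rolls node pools onto
  # the latest node image as AKS publishes it
  nodeOSUpgradeChannel: NodeImage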
Contract changes [optional]
Data model changes [optional]
/kind proposal
/assign @sadysnaat
/area managedclusters
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/unassign @sadysnaat
/help
@jackfrancis: This request has been marked as needing help from a contributor.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
/unassign @sadysnaat
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone next
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/assign LochanRn
This should be supported out of the box now with the CAPZ ASO v2 API. cc @nojnhuh @nawazkh
More context: https://capz.sigs.k8s.io/topics/aso#experimental-aso-api
This would be the particular kind of change to apply to the flavor template to achieve this:
https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/21479a9a4c640b43e0bef028487c522c55605d06/templates/cluster-template-aks-aso.yaml
diff --git a/templates/cluster-template-aks-aso.yaml b/templates/cluster-template-aks-aso.yaml
index 29f77594e..e83d8c9f7 100644
--- a/templates/cluster-template-aks-aso.yaml
+++ b/templates/cluster-template-aks-aso.yaml
@@ -37,6 +37,9 @@ spec:
         name: ${CLUSTER_NAME}
       servicePrincipalProfile:
         clientId: msi
+      autoUpgradeProfile:
+        nodeOSUpgradeChannel: NodeImage
+        upgradeChannel: patch
   version: ${KUBERNETES_VERSION}
 ---
 apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
The ManagedCluster resource embedded in the AzureASOManagedControlPlane has this structure: https://azure.github.io/azure-service-operator/reference/containerservice/v1api20231001/#containerservice.azure.com/v1api20231001.ManagedCluster
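For completeness, a minimal sketch of where the embedded ManagedCluster and the auto-upgrade settings sit inside the AzureASOManagedControlPlane once the patch above is applied (trimmed to the fields relevant here; a real template carries more ManagedCluster configuration):

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
kind: AzureASOManagedControlPlane
metadata:
  name: ${CLUSTER_NAME}
spec:
  version: ${KUBERNETES_VERSION}
  resources:
  # Embedded ASO manifest; CAPZ applies this ManagedCluster through ASO.
  - apiVersion: containerservice.azure.com/v1api20231001
    kind: ManagedCluster
    metadata:
      name: ${CLUSTER_NAME}
    spec:
      # The fields added by the diff above:
      autoUpgradeProfile:
        nodeOSUpgradeChannel: NodeImage
        upgradeChannel: patch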
/close
@nojnhuh: Closing this issue.
In response to this:
This should be supported out of the box now with the CAPZ ASO v2 API. cc @nojnhuh @nawazkh
More context: https://capz.sigs.k8s.io/topics/aso#experimental-aso-api
This would be the particular kind of change to apply to the flavor template to achieve this:
https://github.com/kubernetes-sigs/cluster-api-provider-azure/blob/21479a9a4c640b43e0bef028487c522c55605d06/templates/cluster-template-aks-aso.yaml
diff --git a/templates/cluster-template-aks-aso.yaml b/templates/cluster-template-aks-aso.yaml
index 29f77594e..e83d8c9f7 100644
--- a/templates/cluster-template-aks-aso.yaml
+++ b/templates/cluster-template-aks-aso.yaml
@@ -37,6 +37,9 @@ spec:
         name: ${CLUSTER_NAME}
       servicePrincipalProfile:
         clientId: msi
+      autoUpgradeProfile:
+        nodeOSUpgradeChannel: NodeImage
+        upgradeChannel: patch
   version: ${KUBERNETES_VERSION}
 ---
 apiVersion: infrastructure.cluster.x-k8s.io/v1alpha1
The ManagedCluster resource embedded in the AzureASOManagedControlPlane has this structure: https://azure.github.io/azure-service-operator/reference/containerservice/v1api20231001/#containerservice.azure.com/v1api20231001.ManagedCluster
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.