cluster-api-provider-aws
Update ASG when AWSMachinePool subnets change
We only use the subnets field of AWSMachinePool during ASG creation and do not update the Auto Scaling Group if the subnets change. UpdateASG() should return true when a change is detected in the subnets. https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6ab95625da2aabb209bc5744883eb4c009cc4fba/exp/controllers/awsmachinepool_controller.go#L534
Once updated with a new VPCZoneIdentifier, the ASG rolls out all machines and brings up new ones.
/kind feature
/priority important-longterm
/milestone v1.x
/assign
We need to convert the existing subnets in the ASG (VPCZoneIdentifier) to AutoScalingGroup.Subnets in the SDKToAutoScalingGroup() method; this conversion is missing right now.
VPCZoneIdentifier is a comma-separated list.
@sedefsavas So here https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/6ab95625da2aabb209bc5744883eb4c009cc4fba/exp/controllers/awsmachinepool_controller.go#L534
I would create an autoscaling.Group (from github.com/aws/aws-sdk-go/service/autoscaling) using the *scope.MachinePoolScope, pass it into SDKToAutoScalingGroup(), which returns an expinfrav1.AutoScalingGroup, and compare that to existingASG *expinfrav1.AutoScalingGroup (the value originally passed into the function).
Does this sound appropriate?
SDKToAutoScalingGroup() should not need to be called from awsmachinepool_controller.go
1) Changes to be made in the SDKToAutoScalingGroup() method:
add the Subnets field to the conversion:
i := &expinfrav1.AutoScalingGroup{
	// VPCZoneIdentifier is a comma-separated list of subnet IDs.
	Subnets: strings.Split(aws.StringValue(v.VPCZoneIdentifier), ", "),
}
You can see how we generate VPCZoneIdentifier in the runPool() method: aws.String(strings.Join(i.Subnets, ", ")),
2) In awsmachinepool_controller.go, just diff existingASG with machinePoolScope.AWSMachinePool.Spec.Subnets
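The diff in step 2 could look roughly like the sketch below. subnetsChanged is a hypothetical helper, not part of the provider's API, and the order-insensitive comparison is an assumption: VPCZoneIdentifier gives no ordering guarantee, so a reordered but otherwise identical list arguably should not trigger an ASG update.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// subnetsChanged reports whether the desired subnet list differs from the
// subnets currently set on the ASG, ignoring order. Hypothetical helper
// for illustration only.
func subnetsChanged(existing, desired []string) bool {
	a := append([]string(nil), existing...)
	b := append([]string(nil), desired...)
	sort.Strings(a)
	sort.Strings(b)
	if len(a) != len(b) {
		return true
	}
	for i := range a {
		if a[i] != b[i] {
			return true
		}
	}
	return false
}

func main() {
	// existingASG.Subnets, as recovered from VPCZoneIdentifier.
	existing := strings.Split("subnet-a, subnet-b", ", ")
	// machinePoolScope.AWSMachinePool.Spec.Subnets (desired state).
	desired := []string{"subnet-b", "subnet-a"}
	fmt.Println(subnetsChanged(existing, desired)) // false: same set, different order
}
```

If the spec's subnet list should instead be treated as ordered, a plain element-by-element comparison without the sort would be the stricter choice.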
@sedefsavas I made a PR, but I'm a bit confused about the comparison/diff check between existingASG and machinePoolScope.AWSMachinePool.Spec.Subnets; I will follow up on any requested changes. Thank you for all the help so far.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
This issue has a corresponding PR open, in case folks are interested in picking it up.
/unassign @DiptoChakrabarty
@Ankitasw: Reopened this issue.
In response to this:
/reopen
This issue has a corresponding PR open, in case folks are interested in picking it up.
/unassign @DiptoChakrabarty
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign