
Modify NSG IsManaged function to use tag

Open josh-ferrell opened this issue 2 years ago • 4 comments

/kind feature

Describe the solution you'd like: Modify the NSG IsManaged function to use the "owned" tag.

Anything else you would like to add: Dependent on #2367 being closed.
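To make the request concrete, here is a minimal sketch of what a tag-based check could look like. The tag key format and function names below are illustrative assumptions, not the actual CAPZ constants or API; Azure tag names cannot contain `/`, so a flattened key is assumed.

```go
package main

import "fmt"

// ownedTagKey builds a hypothetical CAPZ ownership tag key for a cluster.
// ASSUMPTION: the real key format in CAPZ may differ.
func ownedTagKey(clusterName string) string {
	return fmt.Sprintf("sigs.k8s.io_cluster-api-provider-azure_cluster_%s", clusterName)
}

// isManaged reports whether the NSG carries the "owned" tag for this cluster,
// instead of inferring ownership from the VNet as today.
func isManaged(tags map[string]string, clusterName string) bool {
	return tags[ownedTagKey(clusterName)] == "owned"
}

func main() {
	owned := map[string]string{ownedTagKey("my-cluster"): "owned"}
	byo := map[string]string{"team": "networking"} // pre-provisioned NSG, no CAPZ tag

	fmt.Println(isManaged(owned, "my-cluster")) // true
	fmt.Println(isManaged(byo, "my-cluster"))   // false
}
```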

josh-ferrell avatar Jun 13 '22 11:06 josh-ferrell

@josh-ferrell could you expand a bit on the motivation here? Is it purely code cleanup? Is it to allow BYO NSG without bringing a vnet?

Things to watch out for will be

  1. How will we know to create the NSG in the first place? For VNets we just create the VNet if it doesn't exist; it never gets updated. For NSGs it's a little different because we update NSGs when updates are needed (e.g. a rule is added), so we can't simply skip create/update if it already exists. Right now this relies on the fact that if the VNet is owned by CAPZ, then so is the NSG.
  2. How to ensure backward compatibility. We can't simply assume that all NSGs without the tag are unmanaged, since they may have been created by a previous version of CAPZ that wasn't creating the tag yet.
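The two concerns above could be combined in a tag-first check with a legacy fallback. This is a hedged sketch under stated assumptions: the function names, signatures, and tag key below are hypothetical, not the real CAPZ code.

```go
package main

import "fmt"

// nsgIsManaged decides whether CAPZ should create/update an NSG.
// Sketch only; names are illustrative, not the actual CAPZ API.
//  1. If the NSG carries the "owned" tag, it is managed.
//  2. If it has no CAPZ tag, it may predate tagging, so fall back to the
//     current heuristic: the NSG is managed iff the VNet is managed.
func nsgIsManaged(nsgTags map[string]string, ownedKey string, vnetIsManaged bool) bool {
	if nsgTags[ownedKey] == "owned" {
		return true
	}
	// Backcompat: absence of the tag is not proof the NSG is unmanaged.
	return vnetIsManaged
}

func main() {
	key := "sigs.k8s.io_cluster-api-provider-azure_cluster_my-cluster" // assumed key format

	fmt.Println(nsgIsManaged(map[string]string{key: "owned"}, key, false)) // true: tag wins
	fmt.Println(nsgIsManaged(map[string]string{}, key, true))              // true: legacy fallback
	fmt.Println(nsgIsManaged(map[string]string{}, key, false))             // false: BYO NSG
}
```

Note that the fallback means an untagged BYO NSG inside a CAPZ-managed VNet would still be treated as managed, which matches today's behavior.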

CecileRobertMichon avatar Jun 13 '22 22:06 CecileRobertMichon

@CecileRobertMichon I need to think about this some more. I was trying to stay consistent with #2380. Is it realistic for an end user to bring their own VNet but want CAPZ to still manage NSGs, etc.? For instance, if their LAN/WAN team pre-provisions peered virtual networks but leaves PaaS teams to configure the rest?

josh-ferrell avatar Jun 21 '22 03:06 josh-ferrell

It's a valid use case, although so far we decided to stay away from it in CAPZ because as soon as CAPZ manages subnets/NSGs/route tables but not the VNet, it means it needs to make PUT calls to a resource it didn't create and doesn't own, since the subnets are a nested resource of the VNet. This is dangerous as it could cause race conditions if something else is updating the VNet at the same time, and we might overwrite some changes made by their LAN/WAN team. In general, the implicit rule we've adopted is "CAPZ only touches the resources it owns". We can revisit but need to be very careful about the implications of modifying a VNet that is being managed elsewhere.

Do you have a real user scenario for this, or is it hypothetical at this point?

CecileRobertMichon avatar Jun 21 '22 14:06 CecileRobertMichon

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 19 '22 15:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 19 '22 15:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 18 '22 16:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 18 '22 16:11 k8s-ci-robot