cluster-api-provider-openstack

Impossible to have port security on with no security groups

mkjpryor opened this issue 3 years ago

/kind bug

What steps did you take and what happened:

I set disablePortSecurity: false and securityGroups: [] on a port, and the port got the default security groups for the instance.
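For reference, the configuration described above would look roughly like this (a sketch of one entry in an OpenStackMachine `ports` list; the field names `disablePortSecurity` and `securityGroups` are from the report, the surrounding structure is assumed):

```yaml
# Hypothetical excerpt of an OpenStackMachine port spec (structure assumed).
# Intent: keep port security enabled, but attach no security groups.
ports:
  - network:
      id: <network-id>
    disablePortSecurity: false
    securityGroups: []
```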

What did you expect to happen:

The port to have port security on but no security groups.

Anything else you would like to add:

Having this would allow us to keep all the anti-spoofing protections, e.g. allowed address pairs, without having to attach security groups. This is important for some high-performance use cases.

mkjpryor avatar Sep 13 '22 14:09 mkjpryor

I set disablePortSecurity: false and securityGroups: [] on a port, and the port got the default security groups for the instance.

I don't remember exactly, but OpenStack seems to honor the default settings, so if you don't specify a security group, the tenant's default security group is applied. Creating a security group with a different name and no rules (e.g. a `no_rule` security group) and trying again might help.

jichenjc avatar Sep 14 '22 06:09 jichenjc

@mkjpryor We add them from the cluster object if they're defined there: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/18226ed88988b360f3cc88ad54960c08255ff43a/controllers/openstackmachine_controller.go#L524-L537

Do you have any defined on the cluster?

mdbooth avatar Oct 04 '22 17:10 mdbooth
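One plausible source of this behaviour (a sketch, not the actual controller code) is the classic Go nil-vs-empty-slice distinction: if the reconciler only checks `len(securityGroups) == 0` before falling back to cluster/tenant defaults, an explicitly empty list is indistinguishable from an omitted field. The hypothetical helper below illustrates the distinction the API would need to preserve:

```go
package main

import "fmt"

// resolveSecurityGroups is a hypothetical sketch (not the CAPO implementation)
// of how an explicit empty securityGroups list must be treated differently
// from an omitted one: a nil slice means "fall back to the defaults", while
// an empty non-nil slice means "port security on, but no groups at all".
func resolveSecurityGroups(portGroups, defaults []string) []string {
	if portGroups == nil {
		// Field omitted: inherit the machine/cluster defaults.
		return defaults
	}
	// Field set (possibly to []): use exactly what was given.
	return portGroups
}

func main() {
	defaults := []string{"default"}
	fmt.Println(resolveSecurityGroups(nil, defaults))        // prints "[default]"
	fmt.Println(resolveSecurityGroups([]string{}, defaults)) // prints "[]"
}
```

A `len(...) == 0` check would collapse both cases into the first branch, which would reproduce the symptom in this report.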

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 02 '23 17:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 01 '23 18:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 03 '23 19:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 03 '23 19:03 k8s-ci-robot