cluster-api-provider-openstack
Support different CNI plugin
/kind feature
Describe the solution you'd like
We use Calico by default now; e.g. the security group rules are set up for Calico. As we evolve, supporting multiple CNIs is a reasonable way to move forward.
At least, I am thinking about adding https://github.com/kubeovn/kube-ovn for now ...
Anything else you would like to add:
/assign
@jichenjc
Don't forget this flag that I added: https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/main/api/v1alpha6/openstackcluster_types.go#L113
I added that so that I could use Cilium
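For context, a minimal sketch of how that kind of flag can bypass the CNI-specific rules; the field name AllowAllInClusterTraffic and the function around it are assumptions written for illustration, so check the linked line for the actual definition:

```go
package securitygroups // illustrative placement, not the actual CAPO package

import infrav1 "sigs.k8s.io/cluster-api-provider-openstack/api/v1alpha6"

// rulesForCluster is a sketch: when the "allow all in-cluster traffic" flag is
// set, skip the CNI-specific rules entirely and let the CNI (e.g. Cilium)
// enforce policy itself; otherwise fall back to the per-CNI rule set.
func rulesForCluster(spec infrav1.OpenStackClusterSpec, cniRules []infrav1.SecurityGroupRule) []infrav1.SecurityGroupRule {
	if spec.AllowAllInClusterTraffic { // assumed field name behind the linked line
		return nil // allow-all node-to-node rules would be created elsewhere
	}
	return cniRules
}
```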
ok, great ~ Thanks for the reminder~
The idea is to, e.g., define different sets of rules so we can distinguish between CNIs and apply the matching set (see the sketch after the snippet below).
// Existing rule allowing ingress to the Kubernetes API server on TCP/6443.
infrav1.SecurityGroupRule{
	Description:  "Kubernetes API",
	Direction:    "ingress",
	EtherType:    "IPv4",
	PortRangeMin: 6443,
	PortRangeMax: 6443,
	Protocol:     "tcp",
},
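A minimal sketch of that idea (not what the provider does today): keep a named rule set per CNI so the controller can pick whichever matches the configured plugin. The map name and the calico/cilium rules below are illustrative only.

```go
package securitygroups // illustrative placement

import infrav1 "sigs.k8s.io/cluster-api-provider-openstack/api/v1alpha6"

// ruleSetsByCNI is a hypothetical lookup from CNI name to the extra security
// group rules that CNI needs, so "calico" and "cilium" can be told apart.
var ruleSetsByCNI = map[string][]infrav1.SecurityGroupRule{
	"calico": {
		{
			Description:  "BGP (calico)",
			Direction:    "ingress",
			EtherType:    "IPv4",
			PortRangeMin: 179,
			PortRangeMax: 179,
			Protocol:     "tcp",
		},
	},
	"cilium": {
		{
			Description:  "VXLAN (cilium)",
			Direction:    "ingress",
			EtherType:    "IPv4",
			PortRangeMin: 8472,
			PortRangeMax: 8472,
			Protocol:     "udp",
		},
	},
}
```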
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
#1323 does not seem like the best solution, as it introduces configuration into code.
The goal of this feature is to move the CNI settings out of code into configuration, either a ConfigMap or a CRD (YAML):
- We need to provide a CNI setting in the cluster definition, e.g. CNI: calico. If the provided CNI is not recognized, or no CNI setting is given at all, it is treated as "all".
- We create multiple pre-defined ConfigMaps (stored as JSON) for calico and cilium first; the names follow a convention such as secgroup-cni-calico. For "all", the name is secgroup-cni-all, meaning all traffic is allowed, like this example: https://stackoverflow.com/questions/61653284/does-kubernetes-take-json-format-as-input-file-to-create-configmap-and-secret
- On startup, the cluster reads the setting, loads the matching config, honors it, and creates the security group (a loading sketch follows this list).
- Users can add their own security group rules by creating a ConfigMap named like secgroup-cni-xxxx, which will then be loaded during the init process.
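A minimal sketch of that loading step: the secgroup-cni-<name> convention comes from the list above, while the loadCNIRules helper and the "rules" JSON key are assumptions made for illustration.

```go
package securitygroups // illustrative placement

import (
	"context"
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"

	infrav1 "sigs.k8s.io/cluster-api-provider-openstack/api/v1alpha6"
)

// loadCNIRules fetches the ConfigMap named secgroup-cni-<cni> (the caller
// would pass "all" when the configured CNI is missing or unrecognized) and
// decodes its "rules" key, a JSON array of security group rules.
func loadCNIRules(ctx context.Context, c client.Client, namespace, cni string) ([]infrav1.SecurityGroupRule, error) {
	name := "secgroup-cni-" + cni
	var cm corev1.ConfigMap
	if err := c.Get(ctx, client.ObjectKey{Namespace: namespace, Name: name}, &cm); err != nil {
		return nil, fmt.Errorf("reading ConfigMap %s: %w", name, err)
	}

	var rules []infrav1.SecurityGroupRule
	if err := json.Unmarshal([]byte(cm.Data["rules"]), &rules); err != nil {
		return nil, fmt.Errorf("parsing rules from %s: %w", name, err)
	}
	return rules, nil
}
```

The reconciler could then feed the returned rules into the existing security group creation path instead of the hard-coded Calico rules.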
To-do items:
- [ ] create a ConfigMap for calico and use it to replace the existing in-code rules
- [ ] allow all traffic via the all ConfigMap
- [ ] allow multiple secgroup-cni ConfigMaps (e.g. cilium)
- [ ] allow custom-defined new CNIs (example ConfigMap below)
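To make the last two items concrete, here is what a user-supplied ConfigMap for an extra CNI could look like under the JSON layout assumed in the loader sketch above; the secgroup-cni-kube-ovn name and the Geneve/UDP 6081 rule are illustrative, not an existing convention.

```go
package securitygroups // illustrative placement

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleKubeOVNConfigMap sketches a user-provided rule set for an additional
// CNI; the loader above would pick it up when the cluster definition says
// CNI: kube-ovn.
var exampleKubeOVNConfigMap = corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "secgroup-cni-kube-ovn"},
	Data: map[string]string{
		// One rule allowing Geneve tunnel traffic (UDP 6081) between nodes.
		"rules": `[{"description":"Geneve (kube-ovn)","direction":"ingress","etherType":"IPv4","portRangeMin":6081,"portRangeMax":6081,"protocol":"udp"}]`,
	},
}
```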
@mdbooth can you help review the above and see whether you have any comments? Thanks
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale