cluster-api-provider-openstack
Use existing router
/kind feature
Describe the solution you'd like:
I tried to run the "Quick start" guide to set up Cluster API on OpenStack but ran into some issues. I'm using an OpenStack deployment that only allows 1 router per project. This doesn't work, since the CAPO controller manager always tries to create a new router. I see that if a router already exists with exactly the name the code expects (k8s-clusterapi-cluster-&lt;namespace&gt;-&lt;cluster-name&gt;, as in the logs below), it will be reused. However, I don't know if that's a recommended way of doing it, and it's still not possible to specify the name of the router it should look for.
These are the logs from the CAPO controller manager while it tries to reconcile the cluster:
I0809 12:36:43.308778 1 network.go:177] controller/openstackcluster "msg"="Reconciling subnet" "cluster"="cluster-api-test-1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="OpenStackCluster" "name"="k8s-clusterapi-cluster-default-cluster-api-test-1"
I0809 12:36:43.771114 1 recorder.go:103] events "msg"="Warning" "message"="Failed to create router k8s-clusterapi-cluster-default-cluster-api-test-1: Expected HTTP response code [] when accessing [POST https://ops.elastx.cloud:9696/v2.0/routers], but got 409 instead\n{\"NeutronError\": {\"type\": \"OverQuota\", \"message\": \"Quota exceeded for resources: ['router'].\", \"detail\": \"\"}}" "object"={"kind":"OpenStackCluster","namespace":"default","name":"cluster-api-test-1","uid":"f3200d67-8f5d-46b2-a7b9-49c66d8fda8d","apiVersion":"infrastructure.cluster.x-k8s.io/v1alpha5","resourceVersion":"16131"} "reason"="Failedcreaterouter"
E0809 12:36:43.781927 1 controller.go:317] controller/openstackcluster "msg"="Reconciler error" "error"="failed to reconcile router: Expected HTTP response code [] when accessing [POST https://ops.elastx.cloud:9696/v2.0/routers], but got 409 instead\n{\"NeutronError\": {\"type\": \"OverQuota\", \"message\": \"Quota exceeded for resources: ['router'].\", \"detail\": \"\"}}" "name"="cluster-api-test-1" "namespace"="default" "reconciler group"="infrastructure.cluster.x-k8s.io" "reconciler kind"="OpenStackCluster"
Anything else you would like to add:
My suggested solution would be to add a new field in the OpenStackCluster object so you could specify an already existing router by name/ID.
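For illustration, here is a minimal sketch of what such a field might look like on the Go API type. The Router field and RouterFilter type are hypothetical names, assuming a filter shape similar to CAPO's other spec fields, not the project's actual API:

```go
// Hypothetical sketch only, not the actual CAPO API: a RouterFilter that
// identifies a pre-existing Neutron router by name or ID.
type RouterFilter struct {
	// Name of an existing router to reuse.
	Name string `json:"name,omitempty"`
	// ID of an existing router to reuse; takes precedence over Name.
	ID string `json:"id,omitempty"`
}

type OpenStackClusterSpec struct {
	// ...existing fields...

	// Router, if set, tells CAPO to reuse the referenced router instead
	// of creating one per cluster. (Hypothetical field.)
	Router *RouterFilter `json:"router,omitempty"`
}
```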
Just one quick question: since your quota is 1, you have to create the router anyway, so what's the difference between letting CAPO create it and creating it yourself (so you can reuse it)? Or do you want to share routers between multiple clusters?
Yeah, that would definitely work and is probably what I will do for now to be able to try it out. But my typical use case is to have at least 2 Kubernetes clusters per OpenStack project, and it would be nice if I could manage all of them with Cluster API.
OK, from a resource-saving perspective I think it's reasonable to use an existing router instead of creating one. I'm just not sure whether the existing code has some gray areas here, since previously the relationship between a router and a CAPO cluster was 1:1.
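To make that gray area concrete, here is a rough sketch of how the router reconciliation could branch, using gophercloud's layer3/routers package. getOrCreateRouter, the RouterFilter argument (from the sketch above), and the fallback naming are assumptions; the real CAPO reconciler is structured differently:

```go
import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/networking/v2/extensions/layer3/routers"
)

// getOrCreateRouter reuses an existing router when a filter is given (or when
// one already exists under the conventional name), and only falls back to
// creating a router when nothing was found and no filter was set.
func getOrCreateRouter(client *gophercloud.ServiceClient, filter *RouterFilter, defaultName string) (*routers.Router, error) {
	opts := routers.ListOpts{Name: defaultName}
	if filter != nil {
		opts = routers.ListOpts{Name: filter.Name, ID: filter.ID}
	}
	page, err := routers.List(client, opts).AllPages()
	if err != nil {
		return nil, err
	}
	found, err := routers.ExtractRouters(page)
	if err != nil {
		return nil, err
	}
	if len(found) > 1 {
		return nil, fmt.Errorf("expected at most one router, found %d", len(found))
	}
	if len(found) == 1 {
		return &found[0], nil // reuse, regardless of who created it
	}
	if filter != nil {
		// A specific router was requested; never create one implicitly.
		return nil, fmt.Errorf("router matching %+v not found", *filter)
	}
	created, err := routers.Create(client, routers.CreateOpts{Name: defaultName}).Extract()
	if err != nil {
		return nil, err
	}
	return created, nil
}
```

Teardown is the other gray area: a router that CAPO merely adopted should arguably be left in place on cluster deletion, which is exactly where the old 1:1 router-to-cluster assumption breaks down.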
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale