cloud-provider-openstack
[occm] Workaround with hostname `ip`.nip.io does not work with IPv6
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened:
We are using the setting enable-ingress-hostname=true because we are using the proxy protocol and therefore have to route all traffic through the load balancer (as described in the docs: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/openstack-cloud-controller-manager/expose-applications-using-loadbalancer-type-service.md#use-proxy-protocol-to-preserve-client-ip).
Unfortunately, this workaround does not work with IPv6, since XXX:XXX:XXX.nip.io is not a valid DNS name, and the controller fails to patch the service:
E0513 08:39:48.529201 1 controller.go:310] error processing service xxx (will retry): failed to update load balancer status: Service "xxx" is invalid: status.loadBalancer.ingress[0].hostname: Invalid value: "XXX:XXX:XXX.nip.io": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
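For illustration, here is a minimal Go sketch of why the fabricated hostname fails validation (assumptions: the hostname is built roughly as `<ip>.nip.io`, simplified from occm's actual code, and the IPs are placeholder documentation addresses). apimachinery's IsDNS1123Subdomain implements the same RFC 1123 subdomain rule quoted in the error above:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// Simplified assumption: occm fabricates the ingress hostname as "<ip>.nip.io".
	for _, ip := range []string{"203.0.113.10", "2001:db8::1"} {
		hostname := fmt.Sprintf("%s.nip.io", ip)
		// IsDNS1123Subdomain applies the same RFC 1123 subdomain rule the
		// apiserver uses for status.loadBalancer.ingress[].hostname.
		if errs := validation.IsDNS1123Subdomain(hostname); len(errs) > 0 {
			fmt.Printf("%s: invalid: %v\n", hostname, errs) // the IPv6 case fails on the colons
		} else {
			fmt.Printf("%s: ok\n", hostname)
		}
	}
}
```

The IPv4-derived name passes, while the IPv6-derived one is rejected because colons are not allowed in an RFC 1123 subdomain.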
What you expected to happen:
Honestly, I think the hostname workaround with `ip`.nip.io does not work properly. I would prefer a proper solution via an annotation such as service.beta.kubernetes.io/do-loadbalancer-hostname, as already described in https://github.com/kubernetes/cloud-provider-openstack/issues/1287#issuecomment-716618469.
I know this is a temporary workaround, but with the current solution, I do not know how to properly use IPv6.
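As a sketch of what such an annotation-based solution could look like in the controller (assumptions: the annotation key below is hypothetical, modeled on DigitalOcean's service.beta.kubernetes.io/do-loadbalancer-hostname, and the function is a simplification, not occm's actual code):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// Hypothetical annotation key, modeled on DigitalOcean's
// service.beta.kubernetes.io/do-loadbalancer-hostname; not an occm API.
const annotationLoadBalancerHostname = "loadbalancer.openstack.org/hostname"

// ingressFor prefers a user-supplied hostname and otherwise reports the
// plain IP, instead of always fabricating "<ip>.nip.io" (which breaks for IPv6).
func ingressFor(svc *corev1.Service, lbIP string) corev1.LoadBalancerIngress {
	if hostname, ok := svc.Annotations[annotationLoadBalancerHostname]; ok && hostname != "" {
		return corev1.LoadBalancerIngress{Hostname: hostname}
	}
	return corev1.LoadBalancerIngress{IP: lbIP}
}

func main() {
	svc := &corev1.Service{}
	svc.Annotations = map[string]string{annotationLoadBalancerHostname: "lb.example.com"}
	fmt.Printf("%+v\n", ingressFor(svc, "2001:db8::1"))
}
```

With an annotation like this, the hostname in the load balancer status would still force traffic through the load balancer (the point of the current workaround), while IPv6 users could supply a name that actually resolves.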
How to reproduce it:
Use enable-ingress-hostname=true with an IPv6 load balancer.
Anything else we need to know?:
We are also using the workaround described in #1086.
As a note: I had to edit the issue because I messed up the references to other issues.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Too bad, I am still very interested in this issue and would also be willing to help here.
/remove-lifecycle stale