Wildcard routes only allow FQDNs with a single label in the wildcard section
We need a single TLS passthrough route to accept hostnames with multiple labels under a given suffix. For example, I configure a TLS passthrough route whose YAML sets `host: wildcard.test.com` with `wildcardPolicy: Subdomain`.

The HAProxy router's config (file: /var/lib/haproxy/conf/os_sni_passthrough.map) is programmed with the following entry:

```
^[^.]*.test.com$ 1
```

This regex allows:
- a.test.com
- anydomain.test.com
- 888.test.com

but does not allow:
- a.b.test.com
- email.office.subdomain.test.com
- etc.

If the regex programmed into the HAProxy config were instead `^(.*.)?test.com$`, multiple labels would be allowed for the given suffix.
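The difference between the two patterns can be checked with a quick sketch using Python's `re` module (close enough to HAProxy's PCRE-based map matching for these patterns; the hostnames are the examples above):

```python
import re

# Regex currently programmed into os_sni_passthrough.map
current = re.compile(r'^[^.]*.test.com$')
# Regex proposed above, allowing any number of labels before the suffix
proposed = re.compile(r'^(.*.)?test.com$')

for host in ('a.test.com', 'a.b.test.com', 'email.office.subdomain.test.com'):
    print(host, bool(current.match(host)), bool(proposed.match(host)))
# a.test.com True True
# a.b.test.com False True
# email.office.subdomain.test.com False True
```

Because `[^.]*` cannot cross a dot, the current pattern only ever wildcards the first label; `(.*.)?` places no such restriction.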
Version
```
sh-4.4# oc version
Client Version: 4.11.0-202303240327.p0.gdea6f47.assembly.stream-dea6f47
Kustomize Version: v4.5.4
Kubernetes Version: v1.24.11+af0420d
```
Steps To Reproduce
- Create a route such as the following:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: test-route
spec:
  host: wildcard.test.com
  wildcardPolicy: Subdomain
  to:
    kind: Service
    name: test-service
  port:
    targetPort: 443
  tls:
    termination: passthrough
```
- Inspect /var/lib/haproxy/conf/os_sni_passthrough.map in the router pod and try to reach the route with multi-label hostnames such as a.b.test.com.
Current Result
The HAProxy router's config (file: /var/lib/haproxy/conf/os_sni_passthrough.map) is programmed with the following entry:

```
^[^.]*.test.com$ 1
```

This regex allows:
- a.test.com
- anydomain.test.com
- 888.test.com

but does not allow:
- a.b.test.com
- email.office.subdomain.test.com
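The mismatch is reproducible outside the cluster with `grep -E` (POSIX ERE, a reasonable local stand-in for how HAProxy evaluates the map regex against the SNI hostname):

```shell
# Current map regex: only the first label is wildcarded
printf 'a.test.com\na.b.test.com\nemail.office.subdomain.test.com\n' \
  | grep -cE '^[^.]*.test.com$'
# → 1 (only a.test.com matches)

# Proposed regex: any number of labels before the suffix
printf 'a.test.com\na.b.test.com\nemail.office.subdomain.test.com\n' \
  | grep -cE '^(.*.)?test.com$'
# → 3 (all three match)
```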
Expected Result
Wildcard routes should allow multiple labels in the wildcard portion of the FQDN, for example a.b.test.com and email.office.subdomain.test.com.
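With the proposed change, the map entry generated for this route would look like the following (a hypothetical fragment, using the regex suggested above with the same map value):

```
^(.*.)?test.com$ 1
```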
Additional Information
@openshift-bot: Closing this issue.