
Missing support for multiple hostnames in API Gateway resources

Open chicco785 opened this issue 1 year ago • 2 comments

What happened:

I deployed an HTTPRoute with multiple hostnames:

apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: backend
spec:
  parentRefs:
    - name: default
      namespace: $NAMESPACE
  hostnames:
    - "example.com"
    - "www.example.com"
  rules:
    - backendRefs:
        - group: ""
          kind: Service
          name: backend
          port: 3000
          weight: 1
      matches:
        - path:
            type: PathPrefix
            value: /

Only example.com is recorded in Route 53.

What you expected to happen:

external-dns should create DNS entries for both example.com and www.example.com.

How to reproduce it (as minimally and precisely as possible):

See example above.

Anything else we need to know?:

Environment:

  • External-DNS version (use external-dns --version): 0.14.0-debian-11-r2
  • DNS provider: AWS Route 53
  • Others:

chicco785 avatar Dec 19 '23 15:12 chicco785

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 18 '24 15:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 17 '24 16:04 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar May 17 '24 16:05 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar May 17 '24 16:05 k8s-ci-robot

Hi, @chicco785. Multiple hostnames have always been supported.

HTTPRoute hostnames must also match the Hostname on the Gateway Listener. Perhaps this was your issue?

If the Listener specifies example.com as its Hostname, it only matches that hostname exactly. In that case it wouldn't also match www.example.com. One option would be removing the Hostname value on the Listener, which would cause it to match all hostnames, but this may be undesired for security reasons. Another option would be adding an additional Listener for www.example.com or *.example.com.

https://pkg.go.dev/sigs.k8s.io/[email protected]/apis/v1#Listener

// Hostname specifies the virtual hostname to match for protocol types that
// define this concept. When unspecified, all hostnames are matched. This
// field is ignored for protocols that don't require hostname based
// matching.
//
// ...
//
// For HTTPRoute and TLSRoute resources, there is an interaction with the
// `spec.hostnames` array. When both listener and route specify hostnames,
// there MUST be an intersection between the values for a Route to be
// accepted. For more information, refer to the Route specific Hostnames
// documentation.
//
// Hostnames that are prefixed with a wildcard label (`*.`) are interpreted
// as a suffix match. That means that a match for `*.example.com` would match
// both `test.example.com`, and `foo.test.example.com`, but not `example.com`.

There are several flags that can filter which domains are handled by providers. Perhaps this was your issue?

https://github.com/kubernetes-sigs/external-dns/blob/v0.14.2/pkg/apis/externaldns/types.go#L480-L483

  • domain-filter
  • exclude-domains
  • regex-domain-filter
  • regex-domain-exclusion
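
If filtering is the cause, the external-dns container args are the place to check. A hedged sketch (the flag values here are assumptions for illustration) showing the difference between a filter that keeps www.example.com and one that would drop it:

```yaml
# Illustrative args on the external-dns container:
args:
  - --source=gateway-httproute
  - --provider=aws
  # --domain-filter matches the domain AND its subdomains,
  # so www.example.com would still be processed:
  - --domain-filter=example.com
  # By contrast, an anchored regex filter like the following
  # would exclude www.example.com:
  # - --regex-domain-filter=^example\.com$
```

If records for www.example.com are missing, it is worth confirming that no anchored regex filter or exclude-domains entry is silently dropping the subdomain.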

abursavich avatar Jul 14 '24 02:07 abursavich