HTTPProxy returns error about `config.enableExternalNameService` when used with Gateway API configuration
What steps did you take and what happened:
My contour configuration is:
- contour-gateway-provisioner with two Gateways:
  - internal for applications using Gateway API
  - legacy for applications using Ingress and HTTPProxy
You can find the full gitops configuration here
The configuration of the legacy Gateway looks like this:
kind: GatewayClass
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: legacy
spec:
  controllerName: projectcontour.io/gateway-controller
  parametersRef:
    kind: ContourDeployment
    group: projectcontour.io
    name: legacy
    namespace: projectcontour
---
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  name: legacy
spec:
  runtimeSettings:
    enableExternalNameService: true
  contour:
    deployment:
      replicas: 1
  envoy:
    networkPublishing:
      serviceAnnotations:
        metallb.universe.tf/address-pool: legacy
---
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1beta1
metadata:
  name: legacy
spec:
  gatewayClassName: legacy
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
    - name: https
      protocol: projectcontour.io/https
      port: 443
      allowedRoutes:
        namespaces:
          from: All
When I deploy the following YAML, I get errors:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: unifi.davinkevin.fr
spec:
  virtualhost:
    fqdn: unifi.davinkevin.fr
  routes:
    - conditions:
        - prefix: /
      services:
        - name: unifi-proxy
          port: 443
          protocol: tls
---
apiVersion: v1
kind: Service
metadata:
  name: unifi-proxy
  annotations:
    projectcontour.io/upstream-protocol.tls: 443,https
  labels:
    app.kubernetes.io/name: unifi
    app.kubernetes.io/component: unifi-proxy
spec:
  type: ExternalName
  externalName: unifi.davinkevin.lan
  ports:
    - port: 443
      name: https
      protocol: TCP
      targetPort: 443
I have the following state in the object status:
Name: unifi.davinkevin.fr
Namespace: unifi
Labels: kustomize.toolkit.fluxcd.io/name=unifi
kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations: <none>
API Version: projectcontour.io/v1
Kind: HTTPProxy
Metadata:
Creation Timestamp: 2023-09-23T16:11:17Z
Generation: 1
Resource Version: 114950990
UID: f26fcfaf-6f34-45e9-8a92-aadcd5148174
Spec:
Routes:
Conditions:
Prefix: /
Services:
Name: unifi-proxy
Port: 443
Protocol: tls
Virtualhost:
Fqdn: unifi.davinkevin.fr
Status:
Conditions:
Errors:
Message: Spec.Routes unresolved service reference: unifi/unifi-proxy is an ExternalName service, these are not currently enabled. See the config.enableExternalNameService config file setting
Reason: ServiceUnresolvedReference
Status: True
Type: ServiceError
Last Transition Time: 2023-09-23T16:11:17Z
Message: At least one error present, see Errors for details
Observed Generation: 1
Reason: ErrorPresent
Status: False
Type: Valid
Current Status: invalid
Description: At least one error present, see Errors for details
Load Balancer:
Ingress:
Ip: 192.168.100.10
Events: <none>
The IP used is the one associated with the legacy gateway.
What did you expect to happen:
I don't expect the error to be reported, because the configuration is set correctly.
I've added a lot of links to the configuration and GitOps repository; feel free to ask if you have any questions.
NOTE: It's a gitops repository, so you have the full configuration there.
Environment:
- Contour version: 1.26.0
- Kubernetes version (use kubectl version): v1.27.4+k3s1
- Kubernetes installer & version: k3s
- Cloud provider or hardware configuration: None, bare metal cluster
- OS (e.g. from /etc/os-release): Debian 11
My initial hunch is that the two different Gateways are both processing the HTTPProxy. This could be avoided by specifying an ingress class for one or the other, and specifying it on the HTTPProxy as well. Haven't had a chance to actually experiment/repro yet though.
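For reference, a minimal sketch of that idea (the ingress class name legacy here is illustrative, not from your repo; the mechanism is runtimeSettings.ingress.classNames on the ContourDeployment plus ingressClassName on the HTTPProxy):
# Sketch only: restrict the legacy Gateway's control plane to one ingress class...
kind: ContourDeployment
apiVersion: projectcontour.io/v1alpha1
metadata:
  name: legacy
  namespace: projectcontour
spec:
  runtimeSettings:
    enableExternalNameService: true
    ingress:
      classNames:
      - legacy
---
# ...and tag the HTTPProxy with the same class so only that Gateway processes it.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: unifi.davinkevin.fr
spec:
  ingressClassName: legacy
  virtualhost:
    fqdn: unifi.davinkevin.fr
  routes:
    - conditions:
        - prefix: /
      services:
        - name: unifi-proxy
          port: 443
          protocol: tls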
I'll try that when possible, but I doubt both tried to process it, because the IP was always the default one, e.g. 192.168.100.10.
And I even tried with enableExternalNameService: true set for both Gateway API configs, with the same result.
FYI, I've tried this using an HTTPRoute attached to a Gateway with enableExternalNameService: true, and I get the following result:
Status:
Parents:
Conditions:
Last Transition Time: 2023-09-30T13:23:53Z
Message: service "mafreebox" is invalid: freebox/mafreebox is an ExternalName service, these are not currently enabled. See the config.enableExternalNameService config file setting, service "mafreebox" is invalid: freebox/mafreebox is an ExternalName service, these are not currently enabled. See the config.enableExternalNameService config file setting
Observed Generation: 1
Reason: BackendNotFound
Status: False
Type: ResolvedRefs
Last Transition Time: 2023-09-30T13:23:53Z
Message: Accepted HTTPRoute
Observed Generation: 1
Reason: Accepted
Status: True
Type: Accepted
Controller Name: projectcontour.io/gateway-controller
Parent Ref:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: internal
With the following ContourDeployment:
apiVersion: projectcontour.io/v1alpha1
kind: ContourDeployment
metadata:
  creationTimestamp: "2023-08-26T09:56:57Z"
  generation: 2
  labels:
    kustomize.toolkit.fluxcd.io/name: projectcontour
    kustomize.toolkit.fluxcd.io/namespace: flux-system
  name: internal
  namespace: projectcontour
  resourceVersion: "114951623"
  uid: dd4f4bce-ad1c-49b1-9369-ade72ee8a4f9
spec:
  contour:
    deployment:
      replicas: 1
  envoy:
    networkPublishing:
      serviceAnnotations:
        metallb.universe.tf/address-pool: internal
  runtimeSettings:
    enableExternalNameService: true
    ingress:
      classNames:
      - internal
Using an HTTPProxy, I have the following status:
Status:
Conditions:
Errors:
Message: Spec.Routes unresolved service reference: freebox/mafreebox is an ExternalName service, these are not currently enabled. See the config.enableExternalNameService config file setting
Reason: ServiceUnresolvedReference
Status: True
Type: ServiceError
Last Transition Time: 2023-09-30T13:28:56Z
Message: At least one error present, see Errors for details
Observed Generation: 1
Reason: ErrorPresent
Status: False
Type: Valid
Current Status: invalid
Description: At least one error present, see Errors for details
Load Balancer:
Ingress:
Ip: 192.168.120.1
And this time, with an ingressClassName:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: mafreebox.davinkevin.fr
spec:
  ingressClassName: internal
  virtualhost:
    fqdn: mafreebox.davinkevin.fr
  routes:
    - conditions:
        - prefix: /
      services:
        - name: mafreebox
          port: 443
          protocol: tls
The other possibility is that the actual ContourConfiguration CRD does not have enableExternalNameService: true. This could happen if, when the Gateway was created, the GatewayClass's parameters did not include this setting, and were only updated after the Gateway was created to specify this setting (ref https://gateway-api.sigs.k8s.io/reference/spec/#gateway.networking.k8s.io/v1.GatewayClass).
You can look at kubectl -n projectcontour get contourconfigurations -o yaml to see if this might be the case. If so, you can always edit them manually, and then restart the corresponding contour control planes.
Otherwise, I haven't fully reproduced your scenario, but I can confirm that at least for the simpler case of one Gateway, when the GatewayClass's parameters contain enableExternalNameService: true, that both HTTPProxy and HTTPRoute work correctly with externalname services.
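For example, something along these lines (a sketch only; the contour-legacy deployment name is an assumption based on the provisioner's naming, mirroring the envoy-legacy service above):
# Check whether the generated ContourConfigurations carry the setting.
kubectl -n projectcontour get contourconfigurations -o yaml

# If enableExternalNameService is missing, edit the CR manually...
kubectl -n projectcontour edit contourconfigurations contourconfig-legacy

# ...and restart the corresponding Contour control plane so it picks up the change.
kubectl -n projectcontour rollout restart deployment contour-legacy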
I think you got it!
apiVersion: v1
items:
- apiVersion: projectcontour.io/v1alpha1
  kind: ContourConfiguration
  metadata:
    creationTimestamp: "2023-08-18T20:07:41Z"
    generation: 1
    labels:
      projectcontour.io/owning-gateway-name: legacy
    name: contourconfig-legacy
    namespace: projectcontour
    resourceVersion: "105031962"
    uid: fb95892c-bb11-46e0-8560-336cc90eafe0
  spec:
    envoy:
      service:
        name: envoy-legacy
        namespace: projectcontour
    gateway:
      gatewayRef:
        name: legacy
        namespace: projectcontour
- apiVersion: projectcontour.io/v1alpha1
  kind: ContourConfiguration
  metadata:
    creationTimestamp: "2023-08-26T09:56:57Z"
    generation: 1
    labels:
      projectcontour.io/owning-gateway-name: internal
    name: contourconfig-internal
    namespace: projectcontour
    resourceVersion: "106881549"
    uid: dc91a661-edb5-4ec7-85d2-dc3b039e7d79
  spec:
    envoy:
      service:
        name: envoy-internal
        namespace: projectcontour
    gateway:
      gatewayRef:
        name: internal
        namespace: projectcontour
    ingress:
      classNames:
      - internal
kind: List
metadata:
  resourceVersion: ""
Editing it manually is one thing, but I would like a reproducible solution (the GitOps way…). Is there a way to trigger a full recreation of the object, e.g. by deleting and recreating something?
If you delete the ContourConfiguration CR it should be recreated the next time the Gateway is reconciled (on change, on provisioner restart, etc), but per the Gateway API spec it's not intended to be continuously reconciled for changes.
tl;dr: even with the change in the ContourConfiguration, it's not working; I get the same error 😓
So I proceeded as you mentioned:
- deletion of the ContourConfiguration
- rollout restart of the contour-gateway-provisioner deployment
I confirm the contourconfig-internal has been recreated with the required field defined:
apiVersion: projectcontour.io/v1alpha1
kind: ContourConfiguration
metadata:
  creationTimestamp: "2023-11-11T09:43:11Z"
  generation: 1
  labels:
    projectcontour.io/owning-gateway-name: internal
  name: contourconfig-internal
  namespace: projectcontour
  resourceVersion: "129173293"
  uid: c5d90e9c-8011-4a89-8318-2d48632d35c5
spec:
  enableExternalNameService: true
  envoy:
    service:
      name: envoy-internal
      namespace: projectcontour
  gateway:
    gatewayRef:
      name: internal
      namespace: projectcontour
  ingress:
    classNames:
    - internal
Then I created the following HTTPRoute:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: unifi.davinkevin.fr
spec:
  parentRefs:
    - name: internal
  hostnames:
    - unifi.davinkevin.fr
  rules:
    - matches:
        - path:
            value: "/"
      backendRefs:
        - name: unifi-proxy
          port: 443
---
apiVersion: v1
kind: Service
metadata:
  name: unifi-proxy
  annotations:
    projectcontour.io/upstream-protocol.tls: 443,https
  labels:
    app.kubernetes.io/name: unifi
    app.kubernetes.io/component: unifi-proxy
spec:
  type: ExternalName
  externalName: unifi.davinkevin.lan
  ports:
    - port: 443
      name: https
      protocol: TCP
      targetPort: 443
But the result is the same as described at the beginning, with the same error in the status of this HTTPRoute:
Name: unifi.davinkevin.fr
Namespace: unifi
Labels: kustomize.toolkit.fluxcd.io/name=unifi
kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations: <none>
API Version: gateway.networking.k8s.io/v1beta1
Kind: HTTPRoute
Metadata:
Creation Timestamp: 2023-11-11T09:53:18Z
Generation: 1
Resource Version: 129175403
UID: 229391e5-9d52-4a3e-b71b-740185d41ad7
Spec:
Hostnames:
unifi.davinkevin.fr
Parent Refs:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: internal
Rules:
Backend Refs:
Group:
Kind: Service
Name: unifi-proxy
Port: 443
Weight: 1
Matches:
Path:
Type: PathPrefix
Value: /
Status:
Parents:
Conditions:
Last Transition Time: 2023-11-11T09:53:18Z
Message: service "unifi-proxy" is invalid: unifi/unifi-proxy is an ExternalName service, these are not currently enabled. See the config.enableExternalNameService config file setting
Observed Generation: 1
Reason: BackendNotFound
Status: False
Type: ResolvedRefs
Last Transition Time: 2023-11-11T09:53:18Z
Message: Accepted HTTPRoute
Observed Generation: 1
Reason: Accepted
Status: True
Type: Accepted
Controller Name: projectcontour.io/gateway-controller
Parent Ref:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: internal
Events: <none>
And exactly the same error with HTTPProxy:
Name: unifi.davinkevin.fr
Namespace: unifi
Labels: kustomize.toolkit.fluxcd.io/name=unifi
kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations: <none>
API Version: projectcontour.io/v1
Kind: HTTPProxy
Metadata:
Creation Timestamp: 2023-11-11T10:06:00Z
Generation: 1
Resource Version: 129178409
UID: ddfbdab5-aceb-45f4-8b48-e0f339722560
Spec:
Routes:
Conditions:
Prefix: /
Services:
Name: unifi-proxy
Port: 443
Protocol: tls
Virtualhost:
Fqdn: unifi.davinkevin.fr
Status:
Conditions:
Errors:
Message: Spec.Routes unresolved service reference: unifi/unifi-proxy is an ExternalName service, these are not currently enabled. See the config.enableExternalNameService config file setting
Reason: ServiceUnresolvedReference
Status: True
Type: ServiceError
Last Transition Time: 2023-11-11T10:06:00Z
Message: At least one error present, see Errors for details
Observed Generation: 1
Reason: ErrorPresent
Status: False
Type: Valid
Current Status: invalid
Description: At least one error present, see Errors for details
Load Balancer:
Ingress:
Ip: 192.168.100.10
Events: <none>
Configuration is still available here: https://gitlab.com/davinkevin.fr/home-server/-/tree/a650d2e2d69bbd0b76fcbfe8e6609137d46c096f/unifi/overlays/k8s-server/proxy
The Contour project currently lacks enough contributors to adequately respond to all Issues.
This bot triages Issues according to the following rules:
- After 60d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, the Issue is closed
You can:
- Mark this Issue as fresh by commenting
- Close this Issue
- Offer to help out with triage
Please send feedback to the #contour channel in the Kubernetes Slack
Any news? /unstale
unstale please 🙏
Still a topic we are very interested in 😇
I came up with https://gist.github.com/skriss/ae3835b964d8ba9477459a2b45c30c8d which seems to work as expected. There is one Gateway that does not support ExternalName services and does not specify an ingress class, a second Gateway that does support ExternalName services and does specify an ingress class, and an HTTPProxy that routes to an ExternalName service and has an ingress class (matching the second Gateway). The HTTPProxy is correctly processed by only the second Gateway and can route traffic. You can see in the logs for the first Gateway's contour deployment that it doesn't process the HTTPProxy because it doesn't have a matching ingress class. You can also try removing the ingress class from the HTTPProxy, and see that it is then processed by the first Gateway, and receives an error condition since the first Gateway doesn't support ExternalName.
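To see which control plane actually processed a given HTTPProxy, the logs can be checked along these lines (a sketch; the contour-legacy/contour-internal deployment names are assumptions based on the provisioner's naming, mirroring the envoy-legacy/envoy-internal services above):
# Search each Gateway's contour logs for the HTTPProxy's FQDN to see which control plane picked it up.
kubectl -n projectcontour logs deploy/contour-legacy | grep unifi.davinkevin.fr
kubectl -n projectcontour logs deploy/contour-internal | grep unifi.davinkevin.fr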
Reviewing the thread on this, I think the other piece that may be missing for you is that after the ContourConfiguration CR is updated to have all the desired settings, the associated Contour deployment needs to be restarted for those settings to take effect. So if you edit a ContourDeployment to add/update runtimeSettings, in order to get those to flow through to the associated Gateway you need to:
- delete the ContourConfiguration CR for the Gateway
- restart the gateway provisioner so it re-reconciles the Gateway and creates a new ContourConfiguration CR
- restart the Gateway's contour deployment so the new ContourConfiguration CR takes effect
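As commands, that sequence would look roughly like this (a sketch; the projectcontour namespace for the provisioner and the contour-internal deployment name are assumptions):
# 1. Delete the generated ContourConfiguration for the Gateway.
kubectl -n projectcontour delete contourconfigurations contourconfig-internal

# 2. Restart the provisioner so it re-reconciles the Gateway and recreates the CR
#    from the updated ContourDeployment runtimeSettings.
kubectl -n projectcontour rollout restart deployment contour-gateway-provisioner

# 3. Restart the Gateway's contour deployment so the new ContourConfiguration takes effect.
kubectl -n projectcontour rollout restart deployment contour-internal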
By the way, I think we can definitely document better how the following work:
- running multiple Gateways, with one intended for HTTPProxies (i.e. use ingress class to tie HTTPProxies to a specific Gateway)
- changing Gateway settings by editing ContourConfigurations and restarting contour deployment
Thank you for your answer. I have to find time to reproduce it… I'll do that as soon as possible.