SRV records are not returned for services when using DNS proxying
Is this the right place to submit this?
- [X] This is not a security vulnerability or a crashing bug
- [X] This is not a question about how to use Istio
Bug Description
When you use DNS proxying, an A record is available for service entries, but no SRV records are available for their ports.
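For context, "DNS proxying" here means Istio's sidecar DNS capture. A minimal sketch of the usual mesh-wide toggle follows (my actual operator config may differ in layout; this is just the relevant setting, per the Istio DNS proxying docs):

```yaml
# Sketch: the standard mesh-wide setting that enables Istio DNS proxying.
# Assumes an IstioOperator-based install; only the relevant field is shown.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  meshConfig:
    defaultConfig:
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
```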
ServiceEntry:

```yaml
apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: core-api
  namespace: legacy
spec:
  hosts:
  - core-api.legacy.svc.cluster.local
  addresses:
  - 10.0.1.2
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
    targetPort: 9110
```
When I dig for my service, I get an A record:
```
$ kubectl exec deployments/haproxy -- dig core-api.legacy.svc.cluster.local

; <<>> DiG 9.18.27 <<>> core-api.legacy.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 64510
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;core-api.legacy.svc.cluster.local. IN A

;; ANSWER SECTION:
core-api.legacy.svc.cluster.local. 30 IN A 10.0.1.2

;; Query time: 0 msec
;; SERVER: 10.43.0.10#53(10.43.0.10) (UDP)
;; WHEN: Wed Dec 04 03:04:27 UTC 2024
;; MSG SIZE rcvd: 100
```
but not an SRV record:
```
$ kubectl exec deployments/haproxy -- dig -t srv _http._tcp.core-api.legacy.svc.cluster.local

; <<>> DiG 9.18.27 <<>> -t srv _http._tcp.core-api.legacy.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 4662
;; flags: qr aa rd; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: a711300339ece8af (echoed)

;; QUESTION SECTION:
;_http._tcp.core-api.legacy.svc.cluster.local. IN SRV

;; AUTHORITY SECTION:
cluster.local. 30 IN SOA ns.dns.cluster.local. hostmaster.cluster.local. 1733280056 7200 1800 86400 30

;; Query time: 4 msec
;; SERVER: 10.43.0.10#53(10.43.0.10) (UDP)
;; WHEN: Wed Dec 04 03:09:00 UTC 2024
;; MSG SIZE rcvd: 178
```
When I do the same with a Kubernetes Service, I get both an A and an SRV record:
```
$ kubectl exec deployments/haproxy -- dig core-api.df-none.svc.cluster.local

; <<>> DiG 9.18.27 <<>> core-api.df-none.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9876
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available

;; QUESTION SECTION:
;core-api.df-none.svc.cluster.local. IN A

;; ANSWER SECTION:
core-api.df-none.svc.cluster.local. 30 IN A 10.43.54.253

;; Query time: 0 msec
;; SERVER: 10.43.0.10#53(10.43.0.10) (UDP)
;; WHEN: Wed Dec 04 03:08:18 UTC 2024
;; MSG SIZE rcvd: 102
```
```
$ kubectl exec deployments/haproxy -- dig -t srv _http._tcp.core-api.df-none.svc.cluster.local

; <<>> DiG 9.18.27 <<>> -t srv _http._tcp.core-api.df-none.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38003
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 2
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1232
; COOKIE: b2f36f51a957e365 (echoed)

;; QUESTION SECTION:
;_http._tcp.core-api.df-none.svc.cluster.local. IN SRV

;; ANSWER SECTION:
_http._tcp.core-api.df-none.svc.cluster.local. 30 IN SRV 0 100 80 core-api.df-none.svc.cluster.local.

;; ADDITIONAL SECTION:
core-api.df-none.svc.cluster.local. 30 IN A 10.43.54.253

;; Query time: 0 msec
;; SERVER: 10.43.0.10#53(10.43.0.10) (UDP)
;; WHEN: Wed Dec 04 03:07:49 UTC 2024
;; MSG SIZE rcvd: 235
```
Version
```
$ istioctl version
client version: 1.24.1
control plane version: 1.24.1
data plane version: 1.24.1 (12 proxies)

$ kubectl version
Client Version: v1.30.6+rke2r1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.6+rke2r1
```
Additional Information
```
$ istioctl bug-report

Target cluster context: default

Running with the following config:

istio-namespace: istio-system
full-secrets: false
timeout (mins): 30
include: { }
exclude: { Namespaces: kube-node-lease,kube-public,kube-system,local-path-storage }
end-time: 2024-12-04 03:00:54.768556817 +0000 UTC

Cluster endpoint: https://127.0.0.1:6443
CLI version:
version.BuildInfo{Version:"1.24.1", GitRevision:"5c178358f9c61c50d3d6149a0b05a609a0d7defd", GolangVersion:"go1.23.2", BuildStatus:"Clean", GitTag:"1.24.1"}

The following Istio control plane revisions/versions were found in the cluster:
Revision default:
&version.MeshInfo{version.ServerInfo{Component:"pilot", Revision:"default", Info:version.BuildInfo{Version:"1.24.1", GitRevision:"5c178358f9c61c50d3d6149a0b05a609a0d7defd", GolangVersion:"", BuildStatus:"Clean", GitTag:"1.24.1"}}}

The following proxy revisions/versions were found in the cluster:
Revision default: Versions {1.24.1}

Fetching logs for the following containers:

default/curl/curl-5c7f47849d-rl659/curl
default/curl/curl-5c7f47849d-rl659/istio-proxy
default/haproxy/haproxy-75c58fc4d6-6n4nl/dig
default/haproxy/haproxy-75c58fc4d6-6n4nl/haproxy
default/haproxy/haproxy-75c58fc4d6-6n4nl/istio-proxy
default/reloader-reloader/reloader-reloader-65f6ddfdf8-5tfcd/reloader-reloader
df-none/core-api/core-api-c5bd5974-bddx6/nginx
df-none/core-api/core-api-c5bd5974-l58hn/nginx
istio-operator/istio-operator/istio-operator-868dc5cbf8-hx267/istio-operator
istio-system/istio-cni-node/istio-cni-node-dwnrd/install-cni
istio-system/istio-cni-node/istio-cni-node-hs2tj/install-cni
istio-system/istio-cni-node/istio-cni-node-v45ln/install-cni
istio-system/istio-cni-node/istio-cni-node-zxkdv/install-cni
istio-system/istio-egress-gateway/istio-egress-gateway-59b97cddd5-9p6js/istio-proxy
istio-system/istio-ingress-gateway/istio-ingress-gateway-86676b44bb-tt96r/istio-proxy
istio-system/istio-k8s-to-legacy-gateway/istio-k8s-to-legacy-gateway-5fc6487bfc-fmmnd/istio-proxy
istio-system/istio-legacy-to-k8s-gateway/istio-legacy-to-k8s-gateway-696874cc86-2k2gc/istio-proxy
istio-system/istio-legacy-to-k8s-gateway/istio-legacy-to-k8s-gateway-696874cc86-h4rhf/istio-proxy
istio-system/istio-legacy-to-k8s-gateway/istio-legacy-to-k8s-gateway-696874cc86-qrhd5/istio-proxy
istio-system/istiod/istiod-5fb95796bc-jgxhw/discovery
istio-system/ztunnel/ztunnel-4mhhm/istio-proxy
istio-system/ztunnel/ztunnel-n8r8j/istio-proxy
istio-system/ztunnel/ztunnel-w5r4l/istio-proxy
istio-system/ztunnel/ztunnel-x54l6/istio-proxy
kubernetes-dashboard/dashboard-metrics-scraper/dashboard-metrics-scraper-795895d745-lg6qm/dashboard-metrics-scraper
kubernetes-dashboard/kubernetes-dashboard/kubernetes-dashboard-56cf4b97c5-j65m4/kubernetes-dashboard
metallb-system/controller/controller-6dd967fdc7-tglw9/controller
metallb-system/speaker/speaker-jpb8c/speaker
metallb-system/speaker/speaker-pf2nd/speaker
metallb-system/speaker/speaker-vd65c/speaker
metallb-system/speaker/speaker-zqsw6/speaker

Fetching Istio control plane information from cluster.

Fetching CNI logs from cluster.

Running Istio analyze on all namespaces and report as below:

Analysis Report:
Info [IST0102] (Namespace cilium-secrets) The namespace is not enabled for Istio injection. Run 'kubectl label namespace cilium-secrets istio-injection=enabled' to enable it, or 'kubectl label namespace cilium-secrets istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace df-none) The namespace is not enabled for Istio injection. Run 'kubectl label namespace df-none istio-injection=enabled' to enable it, or 'kubectl label namespace df-none istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace ingress) The namespace is not enabled for Istio injection. Run 'kubectl label namespace ingress istio-injection=enabled' to enable it, or 'kubectl label namespace ingress istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace istio-operator) The namespace is not enabled for Istio injection. Run 'kubectl label namespace istio-operator istio-injection=enabled' to enable it, or 'kubectl label namespace istio-operator istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace kube-node-lease) The namespace is not enabled for Istio injection. Run 'kubectl label namespace kube-node-lease istio-injection=enabled' to enable it, or 'kubectl label namespace kube-node-lease istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace kube-public) The namespace is not enabled for Istio injection. Run 'kubectl label namespace kube-public istio-injection=enabled' to enable it, or 'kubectl label namespace kube-public istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace kube-system) The namespace is not enabled for Istio injection. Run 'kubectl label namespace kube-system istio-injection=enabled' to enable it, or 'kubectl label namespace kube-system istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace kubernetes-dashboard) The namespace is not enabled for Istio injection. Run 'kubectl label namespace kubernetes-dashboard istio-injection=enabled' to enable it, or 'kubectl label namespace kubernetes-dashboard istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0102] (Namespace metallb-system) The namespace is not enabled for Istio injection. Run 'kubectl label namespace metallb-system istio-injection=enabled' to enable it, or 'kubectl label namespace metallb-system istio-injection=disabled' to explicitly mark it as not needing injection.
Info [IST0118] (Service kube-system/cilium-agent) Port name envoy-metrics (port: 9964, targetPort: envoy-metrics) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service kubernetes-dashboard/dashboard-metrics-scraper) Port name (port: 8000, targetPort: 8000) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service default/haproxy) Port name stats (port: 22002, targetPort: 22002) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service kubernetes-dashboard/kubernetes-dashboard) Port name (port: 443, targetPort: 8443) doesn't follow the naming convention of Istio port.
Info [IST0118] (Service metallb-system/metallb-webhook-service) Port name (port: 443, targetPort: 9443) doesn't follow the naming convention of Istio port.

Creating an archive at /home/vagrant/bug-report.tar.gz.
Time used for creating the tar file is 413.911329ms.
Cleaning up temporary files in /tmp/bug-report.
Done.
```
This is accurate, and it's not really a bug in the implementation; it's simply not implemented at all. So this is probably a feature request, I suppose.
It may help to understand the use case for this a bit more.
Sorry, I thought I had answered and forgot. In this particular case, I want to configure a server-template stanza in an HAProxy configuration file (sketched below). Without SRV records, the port has to live in two places; not a showstopper, but mildly irritating.
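A minimal sketch of what I mean, assuming a resolvers section pointed at the cluster DNS; the section names and timeouts are placeholders:

```
# Hypothetical HAProxy config illustrating the use case.
resolvers kubedns
    nameserver dns1 10.43.0.10:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry   1s
    hold valid      10s

backend core_api
    # With SRV records, server-template discovers both the address and the
    # port from the DNS answer, so the port lives only in the ServiceEntry:
    server-template core 5 _http._tcp.core-api.legacy.svc.cluster.local resolvers kubedns check

    # Without SRV records, the port must be duplicated in the HAProxy config:
    # server-template core 5 core-api.legacy.svc.cluster.local:80 resolvers kubedns check
```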
🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2024-12-05. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.
Created by the issue and PR lifecycle manager.