Inconsistent TLS passthrough connection
Is there an existing issue for this?
- [X] I have searched the existing issues
Kong version ($ kong version)
Kong 2.7.0, KIC 2.1.1
Current Behavior
I configured two tls_passthrough services. One upstream is internal to the k8s cluster and exposes an mTLS interface. The second is mockbin.org, configured only to confirm there was nothing wrong with my internal upstream. I can also port-forward the internal upstream to my host and curl it with 100% success, so I know the pod is stable, is not being evicted, and the k8s Endpoint is not changing.
Calling both of them works, so an end-to-end connection can be established. The problem is that roughly 20%-30% of requests fail with the following:
2022/01/29 03:58:24 [error] 1116#0: *69377 [kong] response.lua:983 unable to proxy stream connection, status: 500, err: {"message":"An unexpected error occurred"} while prereading client data, client: REDACTED, server: unix:/usr/local/kong/stream_tls_passthrough.sock
When this error occurs it is returned immediately, so there is no connection timeout involved.
Here's an example with curl -iv:
23:00 $ curl -k --cacert /tmp/certs/ca.cert --cert /tmp/certs/client.cert --key /tmp/certs/client.key -iv REDACTED:8445/health
* Trying REDACTED...
* TCP_NODELAY set
* Connected to REDACTED (REDACTED) port 8445 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /tmp/certs/ca.cert
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to REDACTED:8445
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to REDACTED:8445
Expected Behavior
100% of requests should resolve and make it to the upstream.
Steps To Reproduce
No response
Anything else?
No response
@scirner22 Are there any more error logs before the line you pasted? Normally when the error message is "An unexpected error occurred" there are more error entries before it giving the details of the error.
When the request is successful, this is logged:
IP_ADDR [31/Jan/2022:05:04:18 +0000] TCP 200 2535 3612 0.159
IP_ADDR [31/Jan/2022:05:04:18 +0000] TCP 200 2535 3612 0.160
Here's an example of the log for the times it fails.
IP_ADDR [31/Jan/2022:05:04:16 +0000] TCP 500 0 0 0.003
IP_ADDR [31/Jan/2022:05:04:16 +0000] TCP 200 0 0 0.007
2022/01/31 05:04:16 [error] 1114#0: *13890847 [kong] response.lua:983 unable to proxy stream connection, status: 500, err: {"message":"An unexpected error occurred"} while prereading client data, client: REDACTED, server: unix:/usr/local/kong/stream_tls_passthrough.sock
These are the only logs that are relevant for this request.
Could you change the services to plain TCP services (instead of tls_passthrough) and test that for us? It would help us figure out whether this is specific to TLS pre-reading or something more general.
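Concretely, something along these lines is what I have in mind: a TCPIngress without the tls_passthrough protocol annotation, so Kong proxies the stream as plain TCP instead of prereading the TLS client hello. This is only a sketch; the names and ports below are placeholders, not your actual resources.

```yaml
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  annotations:
    kubernetes.io/ingress.class: kong
    # note: no konghq.com/protocols: tls_passthrough annotation here
  name: example-tcp            # placeholder name
  namespace: example-namespace # placeholder namespace
spec:
  rules:
  - port: 8445                 # stream listen port exposed by the Kong proxy
    backend:
      serviceName: example-upstream   # placeholder; your internal Service
      servicePort: 443
```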
Sorry, do you mean a typical TLS-terminated service? Here's an example that serves 100% of requests as expected:
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: "true"
  labels:
    app: https-mockbin
  name: https-mockbin
  namespace: core-banking
  resourceVersion: "65885430"
  uid: ac472f41-16ba-472e-b184-09c24fdf8c26
spec:
  externalName: mockbin.org
  ports:
  - name: https-mockbin-443
    port: 443
    protocol: TCP
    targetPort: 443
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    konghq.com/preserve-host: "true"
    konghq.com/protocols: http, https
    konghq.com/strip-path: "true"
    kubernetes.io/ingress.class: kong
  name: https-mockbin-kong
  namespace: core-banking
spec:
  rules:
  - host: <>
    http:
      paths:
      - backend:
          service:
            name: https-mockbin-kong
            port:
              number: 443
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - <>
    secretName: <>
status:
  loadBalancer:
    ingress:
    - ip: <>
```
This is more of a general question, but for tls_passthrough should the Kong service upstream be a resolvable DNS name (service.namespace.svc), or a pointer to a Kong upstream that has a target of service.namespace.svc:port?
cc @Kong/team-k8s
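To make the question concrete, here is a rough decK-style sketch of the two shapes I mean (placeholder names, route/SNI details elided, not the live config, and I'm not sure which of these KIC actually generates):

```yaml
# Two alternative shapes (sketches, not a working decK file), separated by ---.

# Option A: the Kong service host is a resolvable DNS name.
_format_version: "2.1"
services:
- name: gateway-passthrough        # placeholder name
  protocol: tcp
  host: service-core-bank-gateway-tls-passthrough.core-banking.svc
  port: 443
---
# Option B: the Kong service host names a Kong upstream that owns the target.
_format_version: "2.1"
services:
- name: gateway-passthrough
  protocol: tcp
  host: gateway-upstream           # matches the upstream name below
  port: 443
upstreams:
- name: gateway-upstream
  targets:
  - target: service-core-bank-gateway-tls-passthrough.core-banking.svc:443
```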
Last night I deployed the mockbin.org tls_passthrough setup to two separate k8s clusters running Kong. One cluster resolved correctly 100% of the time. Later today I'm going to start looking into the DNS logs on the cluster that's experiencing the inconsistencies. That's slightly outside my debugging realm, but I'll post anything I notice.
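For anyone else digging into the DNS side, one low-effort way to see every query is to enable query logging in the cluster DNS. A sketch for a stock CoreDNS install is below; the ConfigMap name/namespace and the rest of the Corefile are assumptions about a default deployment (GKE's managed kube-dns doesn't use a Corefile at all, so this may not apply to the affected cluster), and the only intended change is adding the `log` plugin.

```yaml
# Sketch only: adds the CoreDNS `log` plugin so every query/response is logged.
# Keep the rest of the Corefile exactly as it already exists in the cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log              # <- added: log every DNS query and response
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
```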
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'm attempting to get more information here.
@hbagdi Here's some additional information that includes the non-tls_passthrough TCPIngress as well.
Resources
```yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: "true"
    konghq.com/protocol: tcp
  labels:
    app: service-core-bank-gateway
  name: service-core-bank-gateway-tls-passthrough
  namespace: core-banking
spec:
  ports:
  - name: https-service-core-bank-gateway-443
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: service-core-bank-gateway
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  labels:
    app: service-core-bank-gateway
  name: service-core-bank-gateway-tcp
  namespace: core-banking
spec:
  rules:
  - backend:
      serviceName: service-core-bank-gateway-tls-passthrough
      servicePort: 443
    host: REDACTED
    port: 8445
---
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  annotations:
    konghq.com/protocols: tls_passthrough
    kubernetes.io/ingress.class: kong
  labels:
    app: service-core-bank-gateway
  name: service-core-bank-gateway-tls-passthrough
  namespace: core-banking
spec:
  rules:
  - backend:
      serviceName: service-core-bank-gateway-tls-passthrough
      servicePort: 443
    host: REDACTED
    port: 8445
```
Hitting either of them results in the same behavior: some portion of requests succeed and some fail fast.
TLS passthrough
success
IP_ADDR [31/Jan/2022:05:04:18 +0000] TCP 200 2535 3612 0.159
IP_ADDR [31/Jan/2022:05:04:18 +0000] TCP 200 2535 3612 0.160
curl -ivk --resolve REDACTED:8445:REDACTED https://REDACTED:8445
* Added REDACTED to DNS cache
* Hostname REDACTED was found in DNS cache
* Trying REDACTED...
* TCP_NODELAY set
* Connected to REDACTED
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Request CERT (13):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Certificate (11):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS alert, bad certificate (554):
* error:1401E412:SSL routines:CONNECT_CR_FINISHED:sslv3 alert bad certificate
* Closing connection 0
curl: (35) error:1401E412:SSL routines:CONNECT_CR_FINISHED:sslv3 alert bad certificate
(Note: this is mTLS, so the bad certificate alert is expected; the point is that the upstream was reached.)
failure
- [17/Feb/2022:21:17:14 +0000] TCP 500 0 0 0.000
2022/02/17 21:17:14 [error] 1102#0: *7424031 [kong] response.lua:983 unable to proxy stream connection, status: 500, err: {"message":"An unexpected error occurred"} while prereading client data, client: REDACTED, server: unix:/usr/local/kong/stream_tls_passthrough.sock
- [17/Feb/2022:21:17:14 +0000] TCP 200 0 0 0.008
16:15 $ curl -ivk --resolve REDACTED:8445:REDACTED https://REDACTED:8445
* Added REDACTED to DNS cache
* Hostname REDACTED was found in DNS cache
* Trying REDACTED...
* TCP_NODELAY set
* Connected to REDACTED
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to REDACTED:8445
* Closing connection 0
curl: (35) LibreSSL SSL_connect: SSL_ERROR_SYSCALL in connection to REDACTED:8445
TCP
success
REDACTED [17/Feb/2022:21:22:11 +0000] TCP 200 7 118 0.090
REDACTED [17/Feb/2022:21:22:11 +0000] TCP 200 5677 271 0.136
curl -ivk --resolve REDACTED:8445:REDACTED https://REDACTED:8445
* Added REDACTED:8445:REDACTED to DNS cache
* Hostname REDACTED was found in DNS cache
* Trying REDACTED...
* TCP_NODELAY set
* Connected to REDACTED (REDACTED) port 8445 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=REDACTED
* start date: Nov 9 21:36:39 2021 GMT
* expire date: Dec 11 21:36:39 2022 GMT
* issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certs.godaddy.com/repository/; CN=Go Daddy Secure Certificate Authority - G2
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: REDACTED:8445
> User-Agent: curl/7.64.1
> Accept: */*
>
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 7)
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
failure
2022/02/17 21:21:06 [error] 1102#0: *7433432 [kong] response.lua:983 unable to proxy stream connection, status: 500, err: {"message":"An unexpected error occurred"} while prereading client data, client: REDACTED, server: unix:/usr/local/kong/stream_tls_terminate.sock
REDACTED [17/Feb/2022:21:21:06 +0000] TCP 500 0 0 0.048
REDACTED [17/Feb/2022:21:21:07 +0000] TCP 200 5641 271 0.101
2022/02/17 21:21:07 [crit] 1102#0: *7433432 SSL_shutdown() failed (SSL: error:14094123:SSL routines:ssl3_read_bytes:application data after close notify) while prereading client data, client: REDACTED, server: unix:/usr/local/kong/stream_tls_terminate.sock
curl -ivk --resolve REDACTED:8445
* Added REDACTED:8445:REDACTED to DNS cache
* Hostname REDACTED was found in DNS cache
* Trying REDACTED...
* TCP_NODELAY set
* Connected to REDACTED (REDACTED) port 8445 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/cert.pem
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=REDACTED
* start date: Nov 9 21:36:39 2021 GMT
* expire date: Dec 11 21:36:39 2022 GMT
* issuer: C=US; ST=Arizona; L=Scottsdale; O=GoDaddy.com, Inc.; OU=http://certs.godaddy.com/repository/; CN=Go Daddy Secure Certificate Authority - G2
* SSL certificate verify ok.
> GET / HTTP/1.1
> Host: REDACTED:8445
> User-Agent: curl/7.64.1
> Accept: */*
>
* TLSv1.2 (IN), TLS alert, close notify (256):
* Empty reply from server
* Connection #0 to host REDACTED left intact
curl: (52) Empty reply from server
* Closing connection 0
I noticed it fails, possibly 100% of the time, on cold requests, for instance when there have been no other requests to that upstream for longer than 30s to 1m. After that request fails fast, kong-proxy issues a burst of DNS queries, and requests after that succeed. If requests are made once per second, Kong seems not to need to issue more DNS queries and requests keep succeeding for a while. It eventually fails again, I guess when the DNS records age out of the cache. The way to produce the longest run of failures is to issue a request, wait a while, and then issue another request. Here's a dump of the DNS traffic after a cold request fails:
ingress-kong-865bbccb7f-xbzj8.42847 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e3e -> 0x3f15!] 33600+ A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. (107)
ingress-kong-865bbccb7f-xbzj8.38997 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e55 -> 0xbb22!] 27542+ A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.kong.svc.cluster.local. (130)
ingress-kong-865bbccb7f-xbzj8.35723 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e50 -> 0x22bf!] 7797+ A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.svc.cluster.local. (125)
ingress-kong-865bbccb7f-xbzj8.54980 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e4c -> 0x4bd3!] 9222+ A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.cluster.local. (121)
ingress-kong-865bbccb7f-xbzj8.41359 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e59 -> 0xbe5d!] 11941+ A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.c.figure-pay-test.internal. (134)
ingress-kong-865bbccb7f-xbzj8.36302 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e4e -> 0xad93!] 14905+ A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.google.internal. (123)
ingress-kong-865bbccb7f-xbzj8.36873 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e3e -> 0x7b8f!] 15900+ SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. (107)
ingress-kong-865bbccb7f-xbzj8.43780 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e55 -> 0xdea3!] 13638+ SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.kong.svc.cluster.local. (130)
ingress-kong-865bbccb7f-xbzj8.51726 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e50 -> 0xbb0f!] 10145+ SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.svc.cluster.local. (125)
ingress-kong-865bbccb7f-xbzj8.42153 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e4c -> 0x8357!] 65180+ SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.cluster.local. (121)
ingress-kong-865bbccb7f-xbzj8.58055 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e59 -> 0xf252!] 47447+ SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.c.figure-pay-test.internal. (134)
ingress-kong-865bbccb7f-xbzj8.42056 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e4e -> 0xf7b3!] 47518+ SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.google.internal. (123)
ingress-kong-865bbccb7f-xbzj8.58888 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e3e -> 0x745e!] 2894+ CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. (107)
ingress-kong-865bbccb7f-xbzj8.56239 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e55 -> 0x3cb7!] 42659+ CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.kong.svc.cluster.local. (130)
ingress-kong-865bbccb7f-xbzj8.51677 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e50 -> 0x3961!] 50560+ CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.svc.cluster.local. (125)
ingress-kong-865bbccb7f-xbzj8.43622 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e4c -> 0xff3b!] 39163+ CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.cluster.local. (121)
ingress-kong-865bbccb7f-xbzj8.54031 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e59 -> 0x7d28!] 15958+ CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.c.figure-pay-test.internal. (134)
ingress-kong-865bbccb7f-xbzj8.45193 > kube-dns.kube-system.svc.cluster.local.53: [bad udp cksum 0x1e4e -> 0xfea0!] 49776+ CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.google.internal. (123)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.43622: [udp sum ok] 39163 NXDomain q: CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (214)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.43780: [udp sum ok] 13638 NXDomain q: SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.kong.svc.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (223)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.58888: [udp sum ok] 2894* q: CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (161)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.38997: [udp sum ok] 27542 NXDomain q: A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.kong.svc.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (223)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.36873: [udp sum ok] 15900* q: SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. 1/0/1 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. [30s] SRV 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.:0 10 100 ar: 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. [30s] A 10.192.0.230 (232)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.51677: [udp sum ok] 50560 NXDomain q: CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.svc.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (218)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.35723: [udp sum ok] 7797 NXDomain q: A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.svc.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (218)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.56239: [udp sum ok] 42659 NXDomain q: CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.kong.svc.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (223)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.42847: [udp sum ok] 33600* q: A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. 1/0/0 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local. [30s] A 10.192.0.230 (123)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.42153: [udp sum ok] 65180 NXDomain q: SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (214)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.54980: [udp sum ok] 9222 NXDomain q: A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (214)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.51726: [udp sum ok] 10145 NXDomain q: SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.svc.cluster.local. 0/1/0 ns: cluster.local. [1m] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1645142400 28800 7200 604800 60 (218)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.58055: [udp sum ok] 47447 NXDomain q: SRV? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.c.figure-pay-test.internal. 0/1/0 ns: internal. [30s] SOA ns.global.gcedns-prod.internal. cloud-dns-hostmaster.google.com. 2015030600 7200 3600 24796800 5 (223)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.45193: [udp sum ok] 49776 NXDomain q: CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.google.internal. 0/1/0 ns: internal. [30s] SOA ns.global.gcedns-prod.internal. cloud-dns-hostmaster.google.com. 2015030600 7200 3600 24796800 5 (212)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.54031: [udp sum ok] 15958 NXDomain q: CNAME? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.c.figure-pay-test.internal. 0/1/0 ns: internal. [30s] SOA ns.global.gcedns-prod.internal. cloud-dns-hostmaster.google.com. 2015030600 7200 3600 24796800 5 (223)
kube-dns.kube-system.svc.cluster.local.53 > ingress-kong-865bbccb7f-xbzj8.41359: [udp sum ok] 11941 NXDomain q: A? 6632396539306533.service-core-bank-gateway-tls-passthrough.core-banking.svc.cluster.local.c.figure-pay-test.internal. 0/1/0 ns: internal. [30s] SOA ns.global.gcedns-prod.internal. cloud-dns-hostmaster.google.com. 2015030600 7200 3600 24796800 5 (223)
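If stale or expired DNS records really are what turns a cold request into an instant 500, Kong's DNS cache settings are the obvious knobs to experiment with. Below is a hypothetical fragment of the proxy container's environment; these map to the dns_* settings in kong.conf, and the values are only examples to test with, not a recommendation.

```yaml
# Fragment of the ingress-kong proxy container spec (not a complete Deployment).
containers:
- name: proxy
  env:
  - name: KONG_DNS_STALE_TTL      # seconds a stale record may still be served
    value: "60"                   # while it is re-resolved in the background
  - name: KONG_DNS_NOT_FOUND_TTL  # cache TTL for empty / not-found answers
    value: "5"
  - name: KONG_DNS_ORDER          # record types Kong tries when resolving a name
    value: "LAST,SRV,A,CNAME"
```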
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This still seems like a problem
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I haven't tested this again on Kong newer than 2.7.0. I ended up not using this feature for now and set up dedicated ingresses to route to the small number of services we needed it for.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Marking this as pinned to avoid it getting recycled by stalebot.
@randmonkey please create a work item with AC for the work that's necessary here.
@randmonkey @fffonion Do we have any update for this one?
This issue is marked as stale because it has been open for 14 days with no activity.