dns
External IP assigned to ClusterIP service resolves to K8s service FQDN instead of host machine's FQDN
I have created an nginx pod and an nginx ClusterIP Service, and assigned an external IP to that Service as shown below. The goal is to reach the nginx web server from outside the cluster. A NodePort Service only exposes high ports (e.g. 30000), but I want to reach nginx on the standard ports (443 and 80). By assigning an external IP to this ClusterIP Service, I can access nginx on those ports using my host machine's IP.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-nginx ClusterIP 10.110.93.251 192.168.0.10 443/TCP,80/TCP,8000/TCP,5443/TCP 79m
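For reference, the Service looks roughly like the following. This is only a reconstruction from the output above; the selector, port names, and target ports are placeholders, not taken from the actual manifest.

apiVersion: v1
kind: Service
metadata:
  name: test-nginx
  namespace: test
spec:
  type: ClusterIP
  selector:
    app: nginx              # placeholder selector (assumption)
  externalIPs:
    - 192.168.0.10          # host machine IP used as the external IP
  ports:
    - name: https
      port: 443
    - name: http
      port: 80
    - name: alt-http
      port: 8000
    - name: alt-https
      port: 5443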
In one of my application pods, I run the commands below to get the FQDN for that IP.
>>> import socket
>>> socket.getfqdn('192.168.0.10')
'test-nginx.test.svc.cluster.local'
Is there any way to block DNS resolution of the external IP so that, when I resolve it, it returns the host machine's FQDN instead? Or is there any other workaround?
Kubernetes version: v1.21.1; CoreDNS version: k8s.gcr.io/coredns/coredns:v1.8.0
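To see where that answer is coming from, a PTR query can be sent directly to the cluster DNS. The sketch below assumes the dnspython package is installed and that the cluster DNS Service IP is 10.96.0.10; both are assumptions, so substitute your own resolver address.

import dns.resolver
import dns.reversename

# Build the reverse name for the external IP, i.e. 10.0.168.192.in-addr.arpa.
rev_name = dns.reversename.from_address("192.168.0.10")

# Point the resolver at the cluster DNS Service (assumed ClusterIP; adjust as needed).
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["10.96.0.10"]

# The PTR answer shows whether CoreDNS's kubernetes plugin is answering for this
# address (test-nginx.test.svc.cluster.local.) or whether an upstream resolver is.
for rdata in resolver.resolve(rev_name, "PTR"):
    print(rdata.target)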
This might be a bug in CoreDNS; I'll look into it.
A workaround for now is to define the reverse zones in CoreDNS's kubernetes plugin config so that they cover only the Service/Pod IP subnets. If your Service/Pod IP subnets are known, this should work.
For example, if all Service IPs fall in 10.0.0.0/8 and all Pod IPs fall in 172.0.0.0/8:
kubernetes cluster.local 10.in-addr.arpa 172.in-addr.arpa {
    pods insecure
    fallthrough in-addr.arpa
    ttl 30
}
Regarding the workaround example above: if Node Local DNS is deployed, it would be better to put the narrowed reverse zones in the Node Local DNS Corefile. That way, reverse lookups that don't fall in those ranges are sent straight upstream instead of being routed through the cluster's CoreDNS Service.
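A rough sketch of that idea, assuming the same 10.0.0.0/8 Service range and 172.0.0.0/8 Pod range as above. The forward target for the cluster zones (10.96.0.10) is an assumption, and the real node-local-dns Corefile uses templated placeholders and extra directives (bind, health, etc.) omitted here:

# Only these reverse zones are forwarded to the cluster CoreDNS Service.
10.in-addr.arpa:53 {
    cache 30
    forward . 10.96.0.10   # cluster DNS Service IP (assumption)
}
172.in-addr.arpa:53 {
    cache 30
    forward . 10.96.0.10
}
# All other queries, including other reverse lookups, go straight upstream.
.:53 {
    cache 30
    forward . /etc/resolv.conf
}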
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
This issue was fixed in CoreDNS with https://github.com/coredns/coredns/pull/5435, released in CoreDNS 1.9.4.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen /remove-lifecycle rotten
I want to test and upgrade the dependencies to use CoreDNS 1.10.0
@dpasiukevich: Reopened this issue.
In response to this:
/reopen /remove-lifecycle rotten
I want to test and upgrade the dependencies to use CoreDNS 1.10.0
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue was fixed in CoreDNS with https://github.com/coredns/coredns/pull/5435, released in CoreDNS 1.9.4.
I've upgraded the dependency to CoreDNS 1.10.0, which should solve the issue.