external-dns
feat: SRV record on headless service
Description
Support creating SRV records for headless Services (`clusterIP: None`).
SRV record form: `_container-port-name._port-protocol.my-svc.my-namespace.svc.cluster-domain.example`
Target form: `pod-hostname.my-svc.my-namespace.svc.cluster-domain.example`
SRV record:
_container-port-name._port-protocol.my-svc.my-namespace.svc.cluster-domain.example 0 IN SRV 0 50 container-port pod-hostname.my-svc.my-namespace.svc.cluster-domain.example
This format is referenced from https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#srv-records. However, I have noticed it does not quite align with the conventions of external-dns. Can anyone tell me the proper form of an SRV record in the context of external-dns?
I have identified that the tests need to be updated; I'm happy to update them after someone takes a quick look at my PR.
An example:
- namespace: `default`
- service name: `cassandra`
- hostname (annotation): `example.org`
- pod 1: `cassandra-0`, container ports (port/name/protocol): 7000/intra-node/TCP, 7001/tls-intra-node/TCP, 7199/jmx/TCP, 9042/cql/TCP
- pod 2: `cassandra-1`, container ports (port/name/protocol): 7000/intra-node/TCP, 7001/tls-intra-node/TCP, 7199/jmx/TCP, 9042/cql/TCP
SRV records created:
_cql._tcp.cassandra.default.svc.example.org 0 IN SRV 0 50 9042 cassandra-0.cassandra.default.svc.example.org;0 50 9042 cassandra-1.cassandra.default.svc.example.org []
_intra-node._tcp.cassandra.default.svc.example.org 0 IN SRV 0 50 7000 cassandra-0.cassandra.default.svc.example.org;0 50 7000 cassandra-1.cassandra.default.svc.example.org []
_jmx._tcp.cassandra.default.svc.example.org 0 IN SRV 0 50 7199 cassandra-0.cassandra.default.svc.example.org;0 50 7199 cassandra-1.cassandra.default.svc.example.org []
_tls-intra-node._tcp.cassandra.default.svc.example.org 0 IN SRV 0 50 7001 cassandra-0.cassandra.default.svc.example.org;0 50 7001 cassandra-1.cassandra.default.svc.example.org []
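For context, a headless Service along the lines of the following sketch would be the input that produces the records above. The manifest is illustrative rather than taken from the PR; I'm assuming the standard `external-dns.alpha.kubernetes.io/hostname` annotation supplies the `example.org` zone.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  namespace: default
  annotations:
    # Assumption: external-dns derives the zone from this annotation
    external-dns.alpha.kubernetes.io/hostname: example.org
spec:
  clusterIP: None  # headless: per-pod A/AAAA records instead of a VIP
  selector:
    app: cassandra
  ports:
    - name: intra-node
      port: 7000
      protocol: TCP
    - name: tls-intra-node
      port: 7001
      protocol: TCP
    - name: jmx
      port: 7199
      protocol: TCP
    - name: cql
      port: 9042
      protocol: TCP
```

Each named port would then yield one `_<port-name>._tcp.cassandra.default.svc.example.org` SRV record with one target per pod.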
Fixes #3993
Checklist
- [ ] Unit tests updated
- [ ] End user documentation updated
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has not been approved by any approver yet. Once this PR has been reviewed and has the lgtm label, please assign szuecs for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
Welcome @theloneexplorerquest!
It looks like this is your first PR to kubernetes-sigs/external-dns 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/external-dns has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @theloneexplorerquest. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Happy to update the PR (tests/docs/conflicts) once the form of the SRV record for headless services has been confirmed.
@theloneexplorerquest For SRV record, there is a PR aiming to change it, see #4001. It's also using port name for the SRV record.
Correct me if I'm wrong, but it seems this kind of SRV record is already created by Kubernetes. The SRV record format you suggest is interesting mainly inside a k8s cluster.
External-DNS is about creating DNS records outside the k8s cluster.
Would you please detail your use case?
I could be wrong here; disclaimer: I am not using this feature myself. I just want to contribute to the project for learning purposes. Will take a look!
I opened the initial request for this in #3993. The request is different from #4001: that PR changes the behaviour of SRV records for NodePort services, whereas I'm interested in creating SRV records for headless services. AFAIK, only A and AAAA records are created for headless services at the moment.
The first service I'd like to use this for is etcd, which supports clients discovering servers via SRV records, so I'd like to see the following records created if my service is configured with a hostname of example.com and has a port named etcd-client-ssl:
- Type `A` or `AAAA`: `etcd-${i}.example.com`, for each pod `${i}` in my service. This is already supported by external-dns.
- Type `SRV`: `_etcd-client-ssl._tcp.example.com`, pointing to the port and the name of the `A`/`AAAA` record created above.
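As a sketch of that use case (the Service name, selector, and client port 2379 — etcd's default — are my assumptions, and I'm assuming the standard `external-dns.alpha.kubernetes.io/hostname` annotation), the headless Service could look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: etcd
  annotations:
    # Assumption: zone for the generated records comes from this annotation
    external-dns.alpha.kubernetes.io/hostname: example.com
spec:
  clusterIP: None  # headless: one A/AAAA record per pod
  selector:
    app: etcd
  ports:
    - name: etcd-client-ssl
      port: 2379
      protocol: TCP
```

The proposed SRV record would then be something like `_etcd-client-ssl._tcp.example.com 0 IN SRV 0 50 2379 etcd-0.example.com`, with one target per pod.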
I think this could also be useful for LoadBalancer type services, as mentioned by @melnikovx, but I don't have an immediate use-case for that.
@mloiseleur do you think above use case is justified? Happy to progress further on this PR :smiley:
Looking again at the Kubernetes doc on headless Services, it does not say anything about SRV records. Looking at this comment on a Kubernetes issue, it seems it's already possible without external-dns for resolution inside the cluster.
In the end, TBH, I do not have a strong opinion on this matter. As long as it can be useful for some use case and it doesn't break or overcomplicate the source code, why not?
cc @johngmyers @Raffo @szuecs
My use case is far less technical. I just want to create SRV records for Minecraft servers I'm hosting on my k8s cluster. Basically you need an A or CNAME record for the hostname, but you also need a supporting SRV record to map the port properly.
```yaml
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: mc-cname
spec:
  endpoints:
    - dnsName: mc.domain.tld
      recordType: CNAME
      targets: ["ipv4.domain.tld"]
---
apiVersion: externaldns.k8s.io/v1alpha1
kind: DNSEndpoint
metadata:
  name: srv-record
spec:
  endpoints:
    - dnsName: _minecraft._tcp
      recordTTL: 300
      recordType: SRV
      targets:
        - "10 5 25565 mc.domain.tld"
```
https://www.namecheap.com/support/knowledgebase/article.aspx/9765/2208/how-can-i-link-my-domain-name-to-a-minecraft-server/
My use case is to use a Rook Ceph cluster to look up the monitor IP addresses via SRV DNS entries.
/ok-to-test
@theloneexplorerquest Do you think you can rebase this PR?
@theloneexplorerquest Tests and documentation are needed on this PR.
/retitle feat: SRV record on headless service
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I don't mind it but:
- [ ] add a flag to enable this, because on big clusters this creates overhead for everyone to support a very specific use case
- [ ] every feature comes with test
- [ ] every feature comes with docs
Hi @szuecs, thanks for the review. I will update this PR when I am free.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.