external-dns
Add option to filter private/public IP with Oracle OCI Network Load Balancer
What would you like to be added: an option to filter private/public IPs when multiple IPs are provided by the cloud load balancer.
Why is this needed: when using a public Network Load Balancer (NLB) in an Oracle OCI environment, the NLB is created with two IPs, one private and one public, and both are mapped to the Service/Ingress:
apiVersion: v1
kind: Service
metadata:
  name: ingress-public-ingress-nginx-controller
  annotations:
    oci-network-load-balancer.oraclecloud.com/internal: "false"
    oci-network-load-balancer.oraclecloud.com/oci-load-balancer-shape: flexible
    oci-network-load-balancer.oraclecloud.com/oci-load-balancer-shape-flex-max: 10Mbps
    oci-network-load-balancer.oraclecloud.com/oci-load-balancer-shape-flex-min: 10Mbps
    oci.oraclecloud.com/load-balancer-type: nlb
  ...
spec:
  type: LoadBalancer
  ...
status:
  loadBalancer:
    ingress:
    - ip: 130.x.x.x
    - ip: 10.x.x.x
The same happens for an Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/access: public
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    external-dns.xxx.io/public: enable
  name: ingress-public-httpd
spec:
  ingressClassName: nginx-public
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx-service
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 10.x.x.x
    - ip: 130.x.x.x
So external-dns adds both IPs as A records to the zone, which makes this configuration unusable. Tested with both "service" and "ingress" as sources. The "external-dns.alpha.kubernetes.io/access" annotation was also checked; it did not help.
Please make the "access" annotation work for this situation, or add another option to control whether the private or the public IP is chosen.
I am also encountering this problem: LoadBalancer-type Services created in OCI with the annotation oci.oraclecloud.com/load-balancer-type=nlb end up with their internal private IPv4 address appended to the "External IPs" and thus to the A record created by external-dns.
Should this be something potentially discussed upstream at oci-cloud-controller-manager?
I think it's fair for external-dns to assume that the Ingress IPs property only contains valid public or private IPv4 addresses, not both.
Hi, if I understand your use case correctly, you might want to have a look at https://github.com/kubernetes-sigs/external-dns/pull/2693. Does this solve your issue?
Yes, it could solve the issue, but note that this option is still not in the current release, 0.12.2.
Yes, unfortunately it is not released yet. I am currently using
image:
  repository: gcr.io/k8s-staging-external-dns/external-dns
  tag: v20220802-v0.12.2-7-ge2b86a11
until it is released.
Yes, I can confirm it works as expected with this image and the new option. Waiting for the release... Thanks!
Thank you @tobikris for your contribution to work around this issue; this should more than cover working around it in OCI.
Though, for any Oracle engineer reading this: please don't let @tobikris' work detract from fixing the skewed behaviour at the source. This is a good patch to work around the issue, but not a permanent fix. I don't want to have to put RFC1918 addresses in every OCI external-dns deployment I have because Oracle are deaf.
image:
  repository: gcr.io/k8s-staging-external-dns/external-dns
  tag: "v20220802-v0.12.2-7-ge2b86a11"
...
extraArgs:
  - --exclude-target-net=10.1.251.0/24 # NLB private subnet
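For context, what --exclude-target-net boils down to conceptually is dropping every candidate target that falls inside the given CIDR. Below is a minimal, self-contained sketch of that idea; the function name and wiring are made up for illustration, not the actual external-dns code:

package main

import (
    "fmt"
    "net"
)

// excludeTargets drops every target that falls inside one of the given
// CIDR ranges, mirroring the idea behind --exclude-target-net.
// Illustrative sketch only, not the external-dns implementation.
func excludeTargets(targets []string, cidrs []string) []string {
    var nets []*net.IPNet
    for _, c := range cidrs {
        if _, n, err := net.ParseCIDR(c); err == nil {
            nets = append(nets, n)
        }
    }
    var kept []string
    for _, t := range targets {
        ip := net.ParseIP(t)
        excluded := false
        for _, n := range nets {
            if ip != nil && n.Contains(ip) {
                excluded = true
                break
            }
        }
        if !excluded {
            kept = append(kept, t)
        }
    }
    return kept
}

func main() {
    // Using the NLB private subnet from the snippet above.
    fmt.Println(excludeTargets([]string{"130.0.0.10", "10.1.251.7"}, []string{"10.1.251.0/24"}))
    // Output: [130.0.0.10]
}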
It would make more sense if the default behavior of external-dns were to prefer non-RFC1918 addresses when multiple addresses are available. The net-filter args will not work if you want to use both private and public NLBs.
diff --git a/source/service.go b/source/service.go
index 9c47579d..81b856db 100644
--- a/source/service.go
+++ b/source/service.go
@@ -19,6 +19,7 @@ package source
 import (
     "context"
     "fmt"
+    "net"
     "sort"
     "strings"
     "text/template"
@@ -529,6 +530,7 @@ func extractServiceExternalName(svc *v1.Service) endpoint.Targets {
 func extractLoadBalancerTargets(svc *v1.Service) endpoint.Targets {
     var (
+        ip4target   string
         targets     endpoint.Targets
         externalIPs endpoint.Targets
     )
@@ -536,13 +538,25 @@ func extractLoadBalancerTargets(svc *v1.Service) endpoint.Targets {
     // Create a corresponding endpoint for each configured external entrypoint.
     for _, lb := range svc.Status.LoadBalancer.Ingress {
         if lb.IP != "" {
-            targets = append(targets, lb.IP)
+            // Track a single IPv4 target, preferring a public address
+            // over an RFC 1918 private one.
+            if ip := net.ParseIP(lb.IP); ip != nil && ip.To4() != nil {
+                if ip4target == "" || !ip.IsPrivate() {
+                    ip4target = ip.String()
+                }
+            } else {
+                targets = append(targets, lb.IP)
+            }
         }
         if lb.Hostname != "" {
             targets = append(targets, lb.Hostname)
         }
     }
+    if ip4target != "" {
+        targets = append(targets, ip4target)
+    }
+
     if svc.Spec.ExternalIPs != nil {
         for _, ext := range svc.Spec.ExternalIPs {
             externalIPs = append(externalIPs, ext)
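The patch hinges on net.IP.IsPrivate (available since Go 1.17), which reports whether an address lies in the RFC 1918 IPv4 ranges (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) or the RFC 4193 IPv6 range. A quick standalone check of the behavior the diff relies on (the sample addresses are arbitrary):

package main

import (
    "fmt"
    "net"
)

func main() {
    // IsPrivate (Go 1.17+) covers the RFC 1918 IPv4 blocks and fc00::/7.
    for _, s := range []string{"10.1.251.7", "172.16.0.1", "192.168.1.1", "130.61.0.10"} {
        ip := net.ParseIP(s)
        fmt.Printf("%-12s private=%v\n", s, ip.IsPrivate())
    }
    // The first three print private=true; the last prints private=false.
}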
Regarding the suggestion that external-dns should prefer non-RFC1918 addresses by default: my idea is that you can always use multiple instances of external-dns with differing filters for specific cases. I think that the mentioned default behavior is actually more limited than my proposed implementation.
@tobikris
Let's focus on the problem for a minute. What is the correct behavior for external-dns when there are multiple targets (Service, Ingress, etc.) associated with an A record? Attempting to create A records with the same name and different IPs means different things for different DNS servers (or providers). It may also just fail (the case for OCI).
Suppose we assume that such usage is within external-dns scope. Then what behavior should the user expect? Load balancing? Round robin? Or just failure? Right now it depends on the provider.
I don't think external-dns should concern itself with DNS traffic management or even attempt to support it (my opinion).
Yes, you could deploy multiple external-dns instances, but that would increase complexity, as you would have to select an external-dns instance using filters (annotations, labels). I don't think it's worth it.
Your solution can be used as a workaround for this problem, but it's a feature in its own right.
Let's get back to the original question: what should external-dns do in this situation? I propose the sensible default of using the public IP. This will work for 99% of people. The other option is an annotation, as sketched below:
external-dns.alpha.kubernetes.io/preferred-ip: 'public' # or 'private'
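A rough sketch of how such an annotation could drive target selection; the annotation key is the one proposed above, and selectPreferred plus its wiring are hypothetical, not existing external-dns code:

package main

import (
    "fmt"
    "net"
)

// Hypothetical annotation key, as proposed above.
const preferredIPAnnotation = "external-dns.alpha.kubernetes.io/preferred-ip"

// selectPreferred keeps only the IPs matching the preference
// ("public" or "private"); any other value returns all IPs unchanged.
// Illustrative sketch only.
func selectPreferred(ips []string, preference string) []string {
    if preference != "public" && preference != "private" {
        return ips
    }
    var out []string
    for _, s := range ips {
        ip := net.ParseIP(s)
        if ip == nil {
            continue
        }
        // IsPrivate treats the RFC 1918 / RFC 4193 ranges as private.
        if (preference == "private") == ip.IsPrivate() {
            out = append(out, s)
        }
    }
    return out
}

func main() {
    lbIPs := []string{"10.1.251.7", "130.61.0.10"} // private + public NLB IPs
    fmt.Println(selectPreferred(lbIPs, "public"))  // [130.61.0.10]
    fmt.Println(selectPreferred(lbIPs, "private")) // [10.1.251.7]
}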
Side note: from the code I see that you can't use net-filter for Ingress objects. Is this intentional?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".