
Add option to filter private/public IP with Oracle OCI Network Load Balancer

IvanI3 opened this issue 2 years ago • 7 comments

What would you like to be added: Add option to filter private/public IP when multiple IPs are provided by cloud load balancer

Why is this needed: When using a public Network Load Balancer in an Oracle OCI environment, two IPs are created for the NLB, one private and one public, and both are mapped to the Service/Ingress status:

apiVersion: v1
kind: Service
metadata:
  name: ingress-public-ingress-nginx-controller
  annotations:
    oci-network-load-balancer.oraclecloud.com/internal: "false"
    oci-network-load-balancer.oraclecloud.com/oci-load-balancer-shape: flexible
    oci-network-load-balancer.oraclecloud.com/oci-load-balancer-shape-flex-max: 10Mbps
    oci-network-load-balancer.oraclecloud.com/oci-load-balancer-shape-flex-min: 10Mbps
    oci.oraclecloud.com/load-balancer-type: nlb
...
spec:
  type: LoadBalancer
...
status:
  loadBalancer:
    ingress:
    - ip: 130.x.x.x
    - ip: 10.x.x.x

and the same for Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/access: public
    external-dns.alpha.kubernetes.io/cloudflare-proxied: "false"
    external-dns.xxx.io/public: enable
  name: ingress-public-httpd
spec:
  ingressClassName: nginx-public
  rules:
  - host: www.example.com
    http:
      paths:
      - backend:
          service:
            name: nginx-service
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 10.x.x.x
    - ip: 130.x.x.x

So external-dns adds both IPs as A records to the zone, which makes this configuration unusable. Tested with both "service" and "ingress" as the source. The "external-dns.alpha.kubernetes.io/access" annotation was also checked; it did not help.

Please make the "access" annotation work for this situation, or add another option to control whether the private or the public IP is chosen.

IvanI3 avatar Jun 14 '22 05:06 IvanI3

I am also encountering this problem: LoadBalancer type services created in OCI with the annotation oci.oraclecloud.com/load-balancer-type=nlb end up with their internal private IPv4 address appended to the "External IPs" list and thus to the A record created by external-dns.

Should this be something potentially discussed upstream at oci-cloud-controller-manager?

I think it's fair for external-dns to assume that the Ingress IPs property contains only valid public or only valid private IPv4 addresses, not both.

Matthew-Beckett avatar Jul 04 '22 03:07 Matthew-Beckett

Hi, if I understand your usecase correctly you might want to have a look at https://github.com/kubernetes-sigs/external-dns/pull/2693. Does this solve your issue?

tobikris avatar Sep 16 '22 21:09 tobikris
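For context, the flag added in that PR can be passed through the external-dns Helm chart's extraArgs, much like the config quoted later in this thread. A minimal, hedged sketch (the CIDR is a placeholder for your NLB's private subnet, not a value from this issue):

```yaml
# Helm values sketch; the --exclude-target-net flag comes from
# kubernetes-sigs/external-dns PR #2693 and drops any load balancer
# target falling inside the given network from the generated records.
# 10.0.0.0/24 is a placeholder for the NLB's private subnet.
extraArgs:
  - --exclude-target-net=10.0.0.0/24
```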

Yes, it could solve the issue, but it needs verification: this option is still not in the current release, 0.12.2.

IvanI3 avatar Sep 19 '22 12:09 IvanI3

Yes, unfortunately not. I am currently using

    image:
      repository: gcr.io/k8s-staging-external-dns/external-dns
      tag: v20220802-v0.12.2-7-ge2b86a11

until it is released.

tobikris avatar Sep 19 '22 13:09 tobikris

Yes, I can confirm it works as expected with this image and the new option. Waiting for the release... Thanks!

IvanI3 avatar Sep 19 '22 13:09 IvanI3

Hi, if I understand your usecase correctly you might want to have a look at https://github.com/kubernetes-sigs/external-dns/pull/2693. Does this solve your issue?

Thank you @tobikris for your contribution to work around this issue, this should more than cover working around this issue in OCI.

Though, for any Oracle engineer reading this: please don't let @tobikris' work detract from fixing the skewed behaviour at the source. This is a good patch to work around the issue, but not a permanent fix; I don't want to have to put RFC1918 addresses in every OCI external-dns deployment I have because Oracle won't listen.

Matthew-Beckett avatar Sep 19 '22 13:09 Matthew-Beckett

    image:
      repository: gcr.io/k8s-staging-external-dns/external-dns
      tag: "v20220802-v0.12.2-7-ge2b86a11"
    ...
    extraArgs:
      - --exclude-target-net=10.1.251.0/24 # NLB private subnet

hoanbc avatar Sep 23 '22 14:09 hoanbc

It would make more sense if the default behavior of external-dns were to prefer non-RFC1918 addresses when multiple addresses are available. The net-filter args will not work if you want to use both private and public NLBs.

ooraini avatar Oct 04 '22 17:10 ooraini
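The "prefer non-RFC1918 addresses" default proposed above can be sketched as a small standalone function. This is a hypothetical helper, not code from external-dns; it relies on net.IP.IsPrivate (Go 1.17+), which reports the RFC 1918 IPv4 and RFC 4193 IPv6 ranges:

```go
package main

import (
	"fmt"
	"net"
)

// pickTargets keeps only the public addresses when at least one is
// present, and otherwise falls back to the private ones. Unparsable
// entries are skipped.
func pickTargets(ips []string) []string {
	var public, private []string
	for _, s := range ips {
		ip := net.ParseIP(s)
		if ip == nil {
			continue
		}
		if ip.IsPrivate() {
			private = append(private, s)
		} else {
			public = append(public, s)
		}
	}
	if len(public) > 0 {
		return public
	}
	return private
}

func main() {
	// Mirrors the OCI NLB status above: one private, one public address.
	fmt.Println(pickTargets([]string{"10.0.0.5", "130.35.1.2"})) // [130.35.1.2]
	fmt.Println(pickTargets([]string{"10.0.0.5"}))               // [10.0.0.5]
}
```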

diff --git a/source/service.go b/source/service.go
index 9c47579d..81b856db 100644
--- a/source/service.go
+++ b/source/service.go
@@ -19,6 +19,7 @@ package source
 import (
        "context"
        "fmt"
+       "net"
        "sort"
        "strings"
        "text/template"
@@ -529,6 +530,7 @@ func extractServiceExternalName(svc *v1.Service) endpoint.Targets {

 func extractLoadBalancerTargets(svc *v1.Service) endpoint.Targets {
        var (
+               ip4target   string
                targets     endpoint.Targets
                externalIPs endpoint.Targets
        )
@@ -536,13 +538,27 @@ func extractLoadBalancerTargets(svc *v1.Service) endpoint.Targets {
        // Create a corresponding endpoint for each configured external entrypoint.
        for _, lb := range svc.Status.LoadBalancer.Ingress {
                if lb.IP != "" {
-                       targets = append(targets, lb.IP)
+                       // Keep a single IPv4 target, preferring a public
+                       // address over a private (RFC 1918) one.
+                       if ip := net.ParseIP(lb.IP); ip != nil && ip.To4() != nil {
+                               if ip4target == "" || !ip.IsPrivate() {
+                                       ip4target = ip.String()
+                               }
+                       } else {
+                               targets = append(targets, lb.IP)
+                       }
                }
                if lb.Hostname != "" {
                        targets = append(targets, lb.Hostname)
                }
        }

+       // Only append when an IPv4 address was found; otherwise an empty
+       // string would be added as a target.
+       if ip4target != "" {
+               targets = append(targets, ip4target)
+       }
+
        if svc.Spec.ExternalIPs != nil {
                for _, ext := range svc.Spec.ExternalIPs {
                        externalIPs = append(externalIPs, ext)

ooraini avatar Oct 04 '22 17:10 ooraini

It would make more sense if the default behavior of external-dns were to prefer non-RFC1918 addresses when multiple addresses are available. The net-filter args will not work if you want to use both private and public NLBs.

My idea is that you can always use multiple instances of external-dns with differing filters for specific cases. I think that your mentioned default behavior is actually more limited than my proposed implementation.

tobikris avatar Oct 04 '22 21:10 tobikris

@tobikris

Let's focus on the problem for a minute. What is the correct behavior for external-dns when there are multiple targets (Service, Ingress, etc.) associated with an A record? Attempting to create A records with the same name and different IPs means different things for different DNS servers (or providers). It may also simply fail, as is the case with OCI.

Suppose we assume that such usage is within external-dns's scope. Then what behavior should the user expect? Load balancing? Round robin? Or just failure? Right now it depends on the provider.

I don't think external-dns should concern itself with DNS traffic management, or even attempt to support it (my opinion).

Yes, you could deploy multiple external-dns instances, but that increases complexity, since you would have to select an external-dns instance using filters (annotations, labels). I don't think it's worth it.

Your solution can be used as a workaround for this problem, but it's a feature in its own right.

Let's get back to the original question: what should external-dns do in this situation? I propose the sensible default of using the public IP; this will work for 99% of people. The other option is an annotation:

external-dns.alpha.kubernetes.io/preferred-ip: 'public' # or 'private'

Side note: from the code I see that you can't use the net-filter args with the Ingress source. Is this intentional?

ooraini avatar Oct 05 '22 17:10 ooraini
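The annotation idea above could be sketched as a filter applied to a resource's resolved targets. Note that the `preferred-ip` annotation is only a proposal in this thread and does not exist in external-dns; the helper below is purely illustrative:

```go
package main

import (
	"fmt"
	"net"
)

// preferredIPAnnotation is the hypothetical annotation proposed above;
// it is not implemented in external-dns.
const preferredIPAnnotation = "external-dns.alpha.kubernetes.io/preferred-ip"

// filterByPreference keeps only the addresses matching the requested
// visibility ("public" or "private"); any other or missing value leaves
// the target list untouched.
func filterByPreference(ips []string, annotations map[string]string) []string {
	pref := annotations[preferredIPAnnotation]
	if pref != "public" && pref != "private" {
		return ips
	}
	wantPrivate := pref == "private"
	var out []string
	for _, s := range ips {
		if ip := net.ParseIP(s); ip != nil && ip.IsPrivate() == wantPrivate {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	ann := map[string]string{preferredIPAnnotation: "public"}
	// With the OCI NLB status above, only the public address survives.
	fmt.Println(filterByPreference([]string{"10.0.0.5", "130.35.1.2"}, ann)) // [130.35.1.2]
}
```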

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 03 '23 17:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 02 '23 18:02 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Mar 04 '23 19:03 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Mar 04 '23 19:03 k8s-ci-robot