
Exposing same set of services via public and internal load balancers using a single ingress controller (split-view DNS setup)

Open calexandre opened this issue 3 years ago • 13 comments

I have a solution on GKE that requires a split-view DNS setup, as described here.

Basically, I have two DNS zones with the same name:

  • public DNS: foo-zone.company.org - a public DNS zone containing publicly (internet-) resolvable DNS records
  • private DNS: foo-zone.company.org - a private DNS zone containing VPC-internal (privately) resolvable DNS records, as sketched below
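
For illustration, the goal is that the same record name resolves differently depending on where the query originates; the addresses below are placeholders:

Public zone foo-zone.company.org:   app.foo-zone.company.org  A  203.0.113.10  (external LB, placeholder)
Private zone foo-zone.company.org:  app.foo-zone.company.org  A  10.0.0.10     (internal LB, placeholder)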

Please note that I don't plan to expose different services across the External and Internal LB, as pointed out in #6138; I am aware that if I want to go down that route, I will need two instances of the controller.

I'm using the helm chart and was able to deploy a single nginx-ingress-controller instance with both the External and Internal load-balancers using the values.yaml below:

controller:
  service:
    external:
      enabled: true
    internal:
      enabled: true
      annotations:
        networking.gke.io/load-balancer-type: "Internal"

My problem/question

When I deployed my first ingress resource, it was only assigned the IP of the External LB. I guess this was expected, but:

  • How can I deploy an ingress resource that also exposes the same host, but via the Internal LB?
  • What is the recommended procedure to ensure that, from a DNS perspective, a single ingress resource works for the split-view setup as described here?

I plan to run two instances of external-dns (one for the public zone and one for the private zone). Both nginx and external-dns document the ability to work with a split-view DNS setup, but I'm finding it hard to glue the two components together, and yet I feel that I'm so close 😢

  • I can make this work if I hardcode the Internal LB's IP into the private DNS zone, but I'm trying to avoid that route to keep the solution as robust as possible - hence the "private" external-dns deployment to handle those records.

This is the ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: app
spec:
  ingressClassName: nginx
  rules:
    - host: app.foo-zone.company.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80

/triage support

calexandre avatar Feb 12 '22 11:02 calexandre

@calexandre: The label(s) triage/support cannot be applied, because the repository doesn't have them.

In response to this:

/triage support

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Feb 12 '22 11:02 k8s-ci-robot

You have correctly identified the need for an explicit, real, bare-bones, simple example of what you described. I don't know of a solution at this moment, but I think one will have to be gathered from what other users with the same scenario are doing. There was a PR not so long ago that actually addressed the "ipaddress config" part of it; I will have to check which one.

longwuyuan avatar Feb 12 '22 17:02 longwuyuan

I meant to say "lack" of an example or documentation.

/kind documentation
/area docs
/triage accepted

longwuyuan avatar Feb 12 '22 17:02 longwuyuan

@indhupriya I can pair with you, if you want to look at this.

longwuyuan avatar Feb 12 '22 17:02 longwuyuan

Sure, @longwuyuan, I can work on this.

indhupriya avatar Feb 12 '22 17:02 indhupriya

Thank you @longwuyuan and @indhupriya for looking into this. In the meantime, does anyone have a clue how to achieve this setup? I searched through the issues and only found results where people are using two distinct instances of the nginx-ingress-controller: one for internal traffic and another for external traffic.

calexandre avatar Feb 12 '22 23:02 calexandre

/triage accepted
/priority backlog

strongjz avatar Feb 15 '22 17:02 strongjz

Hello, I'd like to give an update on the compromise I reached until we figure out a more elegant way. To make things clear, as described above, I'm using the following architecture:

  • I'm using two external-dns instances and a single nginx-ingress-controller instance.
  • I will refer to external-dns-private as the external-dns instance that handles the private DNS zone records
  • I will refer to external-dns-public as the external-dns instance that handles the public DNS zone records
  • This is all built on GCP/GKE.

What worked

  • I was able to make external-dns-private write the nginx Internal LB's IP address automatically to the private DNS zone.
  • This was achieved by annotating the nginx internal service (controller.service.internal.annotations) with this external-dns annotation.

The current nginx helm-chart configuration is as follows:

controller:
  service:
    enableHttp: true
    enableHttps: true
    externalTrafficPolicy: Local
    external:
      enabled: true
    internal:
      enabled: true
      annotations:
        networking.gke.io/load-balancer-type: "Internal"
        external-dns.alpha.kubernetes.io/hostname: "*.company.org"

Of course, this also required some tweaking of the external-dns-private instance:

  • I needed to make sure that the external-dns-private instance would not overlap with its external-dns-public sibling, so I ensured that the private one only watches Kind: Service while the public one only watches Kind: Ingress (a fuller sketch follows the snippets below).
# external-dns-public
sources:
- ingress

extraArgs:
- --google-zone-visibility=public

---

# external-dns-private
sources:
- service

extraArgs:
- --google-zone-visibility=private
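
For completeness, a fuller sketch of the two value files under the same split is shown below. The domain filter and txtOwnerId values are illustrative, and the exact keys depend on the external-dns Helm chart in use; they map to the standard external-dns flags --provider, --domain-filter and --txt-owner-id.

# external-dns-public (sketch: watches only Ingress objects, public zone only)
provider: google
sources:
  - ingress
domainFilters:
  - foo-zone.company.org
txtOwnerId: external-dns-public    # illustrative; keeps the two instances' ownership TXT records apart
extraArgs:
  - --google-zone-visibility=public

---

# external-dns-private (sketch: watches only Services, private zone only)
provider: google
sources:
  - service
domainFilters:
  - foo-zone.company.org
txtOwnerId: external-dns-private   # illustrative
extraArgs:
  - --google-zone-visibility=private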

The end result was satisfying enough because I didn't need to pin the Internal load balancer's IP or hardcode it into the private DNS zone.

What's still missing

  • Still unable to get ingress-generated host records into the internal DNS zone
  • Currently I rely on a wildcard record, which is not desirable in my setup

calexandre avatar Feb 16 '22 15:02 calexandre

Thanks a lot for your feedback on this issue @calexandre. I am currently trying to achieve exactly the same thing for a use case I have, and I was really struggling. Your workaround works nicely for me too.

However, as mentioned, this is far from ideal because of the wildcard record, and indeed it would be much easier to only have to think about the Ingress objects.

Basically, with the controller.service.internal.enabled feature available, I would expect the Ingress to get both the external and internal load-balancer addresses under the status.loadBalancer.ingress array, but I actually only see the external one.
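
For illustration, this is the kind of status one would expect in that case; the addresses are placeholders, and today only the external entry is actually populated:

status:
  loadBalancer:
    ingress:
      - ip: 203.0.113.10   # external LB (placeholder)
      - ip: 10.0.0.10      # internal LB (placeholder; not populated today)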

I guess getting this to work as we expect would require changes to both the ingress-nginx and external-dns controllers. We could imagine a new annotation on the Ingress object, such as nginx.ingress.kubernetes.io/use-internal: true (sketched below), that would make the ingress-nginx controller assign both the external and internal load balancers to the Ingress; on the external-dns side, a controller running with --aws-zone-type=private / --google-zone-visibility=private could use it to create the host record in the private zone. Alternatively, there could be two annotations, one specific to ingress-nginx and one specific to external-dns, which would probably make more sense to keep both projects completely independent.
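
As a sketch only, the proposed annotation (hypothetical; it does not exist in ingress-nginx today) applied to the Ingress from the original post might look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: app
  annotations:
    # hypothetical annotation proposed above; not an existing ingress-nginx annotation
    nginx.ingress.kubernetes.io/use-internal: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.foo-zone.company.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80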

Would that be doable somehow, or is it a bad idea?

cebidhem avatar Mar 30 '22 18:03 cebidhem

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 28 '22 19:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 28 '22 20:07 k8s-triage-robot

/remove-lifecycle rotten

The issue still makes sense to me, as it is a real use case.

cebidhem avatar Jul 31 '22 19:07 cebidhem

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 29 '22 20:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 28 '22 20:11 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Dec 28 '22 21:12 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Dec 28 '22 21:12 k8s-ci-robot