ingress-nginx
Exposing the same set of services via public and internal load balancers using a single ingress controller (split-view DNS setup)
I have a solution on GKE that requires a split-view DNS setup, as described here.
Basically I have two identical DNS zones:
- public DNS: foo-zone.company.org - a public DNS zone containing public (internet-resolvable) DNS records
- private DNS: foo-zone.company.org - a private DNS zone containing VPC/private-resolvable DNS records
Please note that I don't plan to expose different services across the External and Internal LB as pointed out on #6138; I am aware that if I want to go through that route, I will need two instances of the controller.
I'm using the helm chart and was able to deploy a single nginx-ingress-controller instance with both the External and Internal load-balancers using the values.yaml below:
service:
  external:
    enabled: true
  internal:
    enabled: true
    annotations:
      networking.gke.io/load-balancer-type: "Internal"
My problem/question
When I deployed my first Ingress resource, it was only assigned the IP of the External LB. I guess this was expected... But:
- How can I deploy an Ingress resource that also exposes the same host, but via the Internal LB?
- What is the recommended procedure to ensure that, in this setup, a single Ingress resource works from a DNS perspective for the split-view setup as described here?
I plan to run two instances of external-dns (one for the public zone, and the other for the private zone). Both nginx and external-dns describe the ability to work with a split-view DNS setup, but I'm finding it hard to glue these two components together, and yet I feel that I'm so close 😢
- I can make this work if I hardcode the Internal LB's IP into the private DNS zone, but I'm trying to avoid that route to make the solution as robust as possible - hence the "private" external-dns deployment to handle those records.
This is the ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: app
spec:
  ingressClassName: nginx
  rules:
    - host: app.foo-zone.company.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
/triage support
@calexandre: The label(s) triage/support
cannot be applied, because the repository doesn't have them.
In response to this:
/triage support
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
You have correctly identified the explicit, real, bare-bones, simple example for what you described. I don't know a solution at this moment, but I think it will have to be gathered by learning what other users with the same scenario are doing. There was a PR not so long ago that actually addressed the "ipaddress config" part of it. I will have to check which one.
I meant to say "lack" of an example or documentation.
/kind documentation /area docs /triage-accepted
@indhupriya I can pair with you, if you want to look at this.
Sure, @longwuyuan, I can work on this.
Thank you @longwuyuan and @indhupriya for looking into this.
In the meantime, does anyone have a clue how to achieve this setup?
I searched through the issues and only found results where people are using two distinct instances of the nginx-ingress-controller: one for internal and another for external.
/triage accepted /priority backlog
Hello, I'd like to give an update regarding the compromise I reached until we figure out a more elegant way. Just to make things clear, like I described above, I'm using the following architecture:
- I'm using two external-dns instances and a single nginx-ingress-controller instance.
- I will refer to external-dns-private as the external-dns instance that handles the private DNS zone records.
- I will refer to external-dns-public as the external-dns instance that handles the public DNS zone records.
- This is all built on GCP/GKE.
What worked
- I was able to make external-dns-private write the nginx Internal LB IP address automatically into the private DNS zone.
- This was achieved by annotating the nginx internal service (controller.service.internal.annotations) with this external-dns annotation.
The current nginx helm-chart configuration is as follows:
controller:
  service:
    enableHttp: true
    enableHttps: true
    externalTrafficPolicy: Local
    external:
      enabled: true
    internal:
      enabled: true
      annotations:
        networking.gke.io/load-balancer-type: "Internal"
        external-dns.alpha.kubernetes.io/hostname: "*.company.org"
Of course, this also required some tweaking on the external-dns-private instance as well:
- I needed to make sure that the external-dns-private instance would not overlap with its external-dns-public sibling, so I ensured that the private one would only work with the Kind: service source while the public one would only work with the Kind: ingress source.
# external-dns-public
sources:
  - ingress
extraArgs:
  - --google-zone-visibility=public
---
# external-dns-private
sources:
  - service
extraArgs:
  - --google-zone-visibility=private
The end result was satisfying enough because I didn't need to fix the Internal load-balancer's IP nor hardcode it into the private DNS zone.
What's still missing
- I'm still unable to work with Ingress-generated host records in the internal DNS zone.
- Currently I rely on a wildcard record, which is not desirable in my setup (see the sketch below for a possible, narrower alternative).
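A possible middle ground, sketched below and untested on my side: the external-dns.alpha.kubernetes.io/hostname annotation accepts a comma-separated list of hostnames, so if the set of internal hosts is small and known up front, the wildcard could be replaced with an explicit list on the internal service. The second hostname below is just a placeholder for illustration:
controller:
  service:
    internal:
      enabled: true
      annotations:
        networking.gke.io/load-balancer-type: "Internal"
        # Explicit, comma-separated hostnames instead of a wildcard.
        # These names are examples only and would have to be kept in sync
        # with the Ingress hosts by hand.
        external-dns.alpha.kubernetes.io/hostname: "app.foo-zone.company.org,api.foo-zone.company.org"
This avoids the wildcard, but trades it for manual bookkeeping of the host list, so it only softens the problem rather than solving it.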
Thanks a lot for your feedback on this issue @calexandre, I was trying to achieve exactly the same thing for a use case I have and I was really struggling. Your workaround works nicely for me too.
However, as mentioned, this is far from ideal because of the wildcarding, and indeed it would be much easier to only have to think about the Ingress objects.
Basically, with the controller.service.internal.enabled feature available, I would expect the Ingress to get both the external and internal load-balancer addresses under the status.loadBalancer.ingress array, but I actually only see the external one.
I guess getting this to work as we expect would require some changes in both the ingress-nginx and external-dns controllers, but we could imagine having a new annotation on the Ingress object like nginx.ingress.kubernetes.io/use-internal: true that would make the ingress-nginx controller assign both the external and internal load balancers to the Ingress, and on the external-dns side it could be used to make the controller running with --aws-zone-type=private / --google-zone-visibility=private create the host record in the private zone.
Or we could set two annotations, one specific to ingress-nginx and one specific to external-dns; I guess that would make more sense to keep both projects completely independent.
Would that be doable somehow, or is it a false good idea?
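For illustration only, here is a rough sketch of what the first option could look like on the Ingress side; the nginx.ingress.kubernetes.io/use-internal annotation does not exist today and is purely hypothetical:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: app
  annotations:
    # Hypothetical annotation: ingress-nginx would also publish the internal
    # LB address in status.loadBalancer.ingress for this Ingress.
    nginx.ingress.kubernetes.io/use-internal: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: app.foo-zone.company.org
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
An external-dns instance running with --google-zone-visibility=private could then pick up the internal address from status.loadBalancer.ingress and create the record in the private zone.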
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The issue still makes sense to me as it is a real use case.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.