Add grpc_ssl_certificate* nginx config directives when using GRPCS and proxy-ssl-secret annotation.
This allows using client-side mTLS authentication with backend servers.
What this PR does / why we need it:
It was not possible to use the GRPCS backend protocol and enable client-side mTLS authentication at the same time (see https://github.com/kubernetes/ingress-nginx/issues/7082, https://github.com/kubernetes/ingress-nginx/issues/6893). This is because, for gRPC backends, nginx expects a dedicated configuration directive for the client certificate (see http://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_ssl_certificate).
The workaround proposed in the linked issues effectively forwards the connection to the pod at the TCP level. This PR generates the required settings when a proxy-ssl-secret annotation is used and the backend protocol is gRPC, which allows using mTLS with a backend server.
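To illustrate the combination this PR targets (resource names, hostname, and Secret name below are hypothetical), an Ingress that should exercise this code path would look roughly like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-mtls-example          # hypothetical name
  annotations:
    # Backend speaks gRPC over TLS
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
    # Secret holding the client certificate/key used for mTLS to the backend
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/backend-client-cert"
spec:
  ingressClassName: nginx
  rules:
    - host: grpc.example.com       # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grpc-server
                port:
                  number: 50051
```

With this change, the generated server block should contain `grpc_ssl_certificate` and `grpc_ssl_certificate_key` directives pointing at the secret's certificate, rather than only the `proxy_ssl_*` variants that gRPC upstreams ignore.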
Types of changes
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Documentation only
Which issue/s this PR fixes
#6893 #7082
How Has This Been Tested?
Deployed an mTLS-enabled gRPC server in a cluster and used grpcurl with and without the change. Verified that without the change the server terminates the connection, while with the change a client certificate is presented and the connection works as expected.
I could not find an easy way to extend the e2e test suite, as it would require a grpcbin build with mTLS authentication enabled.
Checklist:
- [ ] My change requires a change to the documentation.
- [ ] I have updated the documentation accordingly.
- [x] I've read the CONTRIBUTION guide
- [ ] I have added tests to cover my changes.
- [x] All new and existing tests passed.
Hi @S-Bohn. Thanks for your PR.
I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: S-Bohn
To complete the pull request process, please assign elvinefendi after the PR has been reviewed.
You can assign the PR to them by writing /assign @elvinefendi in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
/ok-to-test May we have some e2e scenario here? Thanks :)
+1 for adding new testcases @S-Bohn
Let me try to extend grpcbin.
I also noticed that some things are missing: grpc_ssl_verify exists as well. Let me go through all proxy-ssl-* annotations and check which ones need a special gRPC case.
News on this?
I updated the PR with an e2e test and added the missing *_ssl_* directives. I used helper functions to determine the backend protocol; this is mainly required for the global per-server case, where both grpc_* and proxy_* directives may be needed. I'm not sure whether there is a better way.
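The prefix-selection idea behind those helpers can be sketched as follows (a minimal illustration with hypothetical function names; the actual helpers in the PR may be structured differently):

```go
package main

import "fmt"

// directivePrefix returns the nginx directive prefix to use for
// TLS-related upstream settings, based on the backend protocol
// annotation value. Hypothetical sketch, not the PR's exact code.
func directivePrefix(backendProtocol string) string {
	switch backendProtocol {
	case "GRPC", "GRPCS":
		return "grpc"
	default:
		return "proxy"
	}
}

// sslCertificateDirectives renders the certificate directives that
// would be emitted for a proxy-ssl-secret annotation, choosing the
// grpc_* or proxy_* variant depending on the backend protocol.
func sslCertificateDirectives(backendProtocol, cert, key string) []string {
	p := directivePrefix(backendProtocol)
	return []string{
		fmt.Sprintf("%s_ssl_certificate %s;", p, cert),
		fmt.Sprintf("%s_ssl_certificate_key %s;", p, key),
	}
}

func main() {
	for _, d := range sslCertificateDirectives("GRPCS", "client.pem", "client.pem") {
		fmt.Println(d)
	}
}
```

For the global per-server case mentioned above, a caller could simply emit both variants by invoking the helper once per protocol.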
I temporarily pushed a patched grpcbin version to get the tests running - that needs to be merged upstream first.
I would propose creating a dedicated PR for the remaining configuration entries that differ for gRPC: probably a no-brainer for grpc_buffer_size and the grpc_*_timeout directives, but I would need to double-check grpc_hide_header, grpc_intercept_errors, and grpc_next_upstream.
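For reference, the gRPC-side counterparts from ngx_http_grpc_module look like this (the values shown are nginx's documented defaults or illustrative examples, not what the controller would emit):

```nginx
grpc_buffer_size      4k;
grpc_connect_timeout  60s;
grpc_send_timeout     60s;
grpc_read_timeout     60s;
grpc_hide_header      X-Accel-Redirect;  # illustrative; directive takes a header name
grpc_intercept_errors off;
grpc_next_upstream    error timeout;
```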
Currently, if an ingress TLS Secret is used for grpc_ssl_certificate* and the Secret is updated (e.g. by cert-manager rotation), the new secret value is not passed to the backend server.
I'm not sure whether this pull request can or should address that issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
+1 would love for this to get some attention
Yes, I need to follow up on this.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.