aws-load-balancer-controller
helm: Add network policy to aws-load-balancer-controller
Issue
N/A
Description
Deploys a NetworkPolicy resource if the configuration is enabled.
I am happy to change the default rules that get created - I found that when I deployed this, these were the minimum rules I needed when a default deny-all network policy was enabled for the cluster.
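For illustration, enabling it looks something like this in values.yaml (`networkPolicy.enabled` is the option discussed below; treat the exact schema as a sketch, not the final shape):

```yaml
# values.yaml excerpt (sketch): turn on the NetworkPolicy added by this PR.
networkPolicy:
  enabled: true
```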
Checklist
~~- [ ] Added tests that cover your change (if possible)~~
- [x] Added/modified documentation as required (such as the README.md, or the docs directory)
- [x] Manually tested
- [x] Made sure the title of the PR is a good description that can go into the release notes
BONUS POINTS checklist: complete for good vibes and maybe prizes?! :exploding_head:
~~- [ ] Backfilled missing tests for code in same general area :tada:~~
~~- [ ] Refactored something and made the world a better place :star2:~~
Hi @jdheyburn. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the `ok-to-test` label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Codecov Report
Base: 54.07% // Head: 54.07% // No change to project coverage :thumbsup:
Coverage data is based on head (a8d06ee) compared to base (a92e689). Patch has no changes to coverable lines.
Additional details and impacted files
```
@@           Coverage Diff           @@
##             main   #2645   +/-   ##
=======================================
  Coverage   54.07%   54.07%
=======================================
  Files         144      144
  Lines        8301     8301
=======================================
  Hits         4489     4489
  Misses       3484     3484
  Partials      328      328
```
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jdheyburn
Once this PR has been reviewed and has the lgtm label, please assign m00nf1sh for approval by writing `/assign @m00nf1sh` in a comment. For more information see: The Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing `/approve` in a comment
Approvers can cancel approval by writing `/approve cancel` in a comment
@kishorj Is it possible to get an ok-to-test on this? Thanks!
@jdheyburn: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@kishorj @M00nF1sh, is it possible to get a review on this when you are available? Thanks!
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
Doesn't it need egress to the AWS API? How is that traffic permitted?
@johngmyers Usually on clusters where you restrict network access (via a deny-all rule) you would permit external traffic on port 443 everywhere before that rule - or you might not, in which case you would add a rule here to do the same.
I don't want to open up 443 by default; I'll leave that up to the user, who has context on their setup.
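For illustration, such a user-side rule might look like this - a minimal sketch assuming the standard networking.k8s.io/v1 API; the name, namespace, and selector labels are assumptions about a typical install, not something this PR manages:

```yaml
# Illustrative user-defined policy (not part of this PR): allow the
# controller pods egress to port 443 anywhere, e.g. to reach the AWS API
# from behind a default deny-all policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-aws-lb-controller-egress-443  # hypothetical name
  namespace: kube-system                    # assumes a typical install namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: aws-load-balancer-controller  # assumed chart label
  policyTypes:
    - Egress
  egress:
    - ports:
        - protocol: TCP
          port: 443
```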
Hope that answers the question :)
The description of `networkPolicy.enabled` states "If `true`, creates a NetworkPolicy with minimal connectivity opened". But it doesn't open minimal connectivity; it opens less than minimal connectivity.
At the very least, the description needs to match the implementation.
Now, I consider a network policy that allows unrestricted egress to any port on the internet to be a strange thing.
I suppose a NetworkPolicy could be tighter than allow-all-internet-egress if it knew the IPs of the relevant Endpoints. One could perhaps default `egressRules` to an allow-all-internet one, so it would not take effect if `egressRules` were explicitly specified.
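For illustration, the defaulting pattern I have in mind would be something like this in the template (a sketch only; the helper names and the `networkPolicy.egressRules` key are assumptions, not the chart's actual schema):

```yaml
# templates/networkpolicy.yaml (sketch): fall back to allow-all-internet
# egress only when the user has not set egressRules explicitly.
{{- if .Values.networkPolicy.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "aws-load-balancer-controller.fullname" . }}  # assumed helper
spec:
  podSelector:
    matchLabels:
      {{- include "aws-load-balancer-controller.selectorLabels" . | nindent 6 }}
  policyTypes:
    - Egress
  egress:
    {{- if .Values.networkPolicy.egressRules }}
    {{- toYaml .Values.networkPolicy.egressRules | nindent 4 }}
    {{- else }}
    # default: allow all internet egress, replaced wholesale when
    # egressRules is specified
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
    {{- end }}
{{- end }}
```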
Thanks for the feedback!
In the PR I have the following text, which explains what is considered minimal default connectivity (see the sketch after this list). I can update the description to note that connectivity to AWS is required and that it is up to the user to add it.
By default the following connectivity is opened:
- ingress/egress to/from other aws-load-balancer-controller pods with matching selector labels
- ingress to pods on `.Values.webhookBindPort`
I am not advocating for unrestricted egress to any port. I mention opening egress on port 443 as an example, but it would be up to the user to decide how they want to open egress connectivity. An opinionated and rigid network policy should not make that decision for them.
Given that IP addresses for endpoints can change, I would not want them baked into a Helm chart.
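To make the defaults above concrete, the rendered policy would look roughly like the following (a sketch: the selector label and the default webhook port of 9443 are assumptions about a typical install):

```yaml
# Sketch of the rendered default policy described above; the selector label
# and webhook port value are assumptions, not verbatim chart output.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: aws-load-balancer-controller
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: aws-load-balancer-controller  # assumed selector label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # ingress from other controller pods with matching selector labels
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: aws-load-balancer-controller
    # ingress to the webhook port (.Values.webhookBindPort; 9443 assumed)
    - ports:
        - protocol: TCP
          port: 9443
  egress:
    # egress to other controller pods with matching selector labels
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: aws-load-balancer-controller
```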
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: jdheyburn
Once this PR has been reviewed and has the lgtm label, please assign m00nf1sh for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing `/approve` in a comment
Approvers can cancel approval by writing `/approve cancel` in a comment
@kishorj @M00nF1sh Please could you review? Thank you in advance!
PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages PRs according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
> You can:
> - Reopen this PR with `/reopen`
> - Mark this PR as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.