nfs-subdir-external-provisioner
NetworkPolicies are not considered
I've configured a "deny all but DNS" network policy:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: only-allow-dns
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
and I am about to enable the required network traffic within, into, and out of the cluster step by step. However, I have explicitly not yet allowed NFS traffic between the nfs-provisioner pod and the NFS server. Nevertheless, all nfs-client PVCs are working: I can see files created in pod mounts appearing on the NFS server, and I can also observe traffic on 2049/tcp on the physical interface of the node where the nfs-provisioner pod is running.
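For reference, this is roughly how I watched the traffic (a sketch; eth0 stands in for the node's actual physical interface):

# Run on the node hosting the nfs-provisioner pod; eth0 is a placeholder.
sudo tcpdump -ni eth0 tcp port 2049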
I expected this traffic to be blocked by the above policies (no others are in place yet), but it isn't.
Is that the behavior you are expecting?
Thank you very much!
P.S.: I've installed the nfs-provisioner this way:
#!/bin/bash
# kubectl apply -f network-policies.yml -n system
helm repo add nfs-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
helm repo update
helm upgrade --install -f values.yml nfs-provisioner nfs-provisioner/nfs-subdir-external-provisioner --version 4.0.2 \
--namespace=system
kubectl annotate storageclass local-path storageclass.kubernetes.io/is-default-class-
kubectl annotate storageclass nfs-client storageclass.kubernetes.io/is-default-class=true
with this values.yml:

nfs:
  server: 172.16.21.19
  path: /volume1/nfs-storage
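For completeness, the egress rule I had planned to add later for the provisioner would look roughly like this (a sketch; the pod label is an assumption and should be checked against the labels the chart actually sets):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nfs-egress
spec:
  podSelector:
    matchLabels:
      app: nfs-subdir-external-provisioner   # assumed chart label
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 172.16.21.19/32   # the NFS server from values.yml
    ports:
    - protocol: TCP
      port: 2049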
Hi @wollud1969 - just from my experience: a NetworkPolicy alone does not deny traffic, nor is the application's deployment responsible for enforcing it. You need a proper network stack (the CNI plugin) in place that implements your policies and blocks the traffic.
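A quick sanity check that enforcement works at all could look like this (a sketch; pod name, image, and URL are arbitrary):

# With deny-all in place, egress from a fresh pod in that namespace
# should time out rather than succeed:
kubectl run np-test -n system --rm -it --image=busybox:1.36 --restart=Never \
  -- wget -qO- -T 5 http://example.com || echo "egress blocked"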
Hi Arne, thank you for your response. Yes, I know, and that network stack is in place. NetworkPolicies in general (access from one pod to another, access from the world to traefik) are working properly. It is just the communication with the NFS server that is not limited. Cheers, Wolfgang
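As far as I understand kubelet behavior, this may be the explanation: for nfs volumes the mount is performed by the kubelet on the node itself, so the NFS traffic originates in the host's network namespace rather than the pod's, and pod-level NetworkPolicies never see it. On the node this can be checked roughly like this (a sketch; assumes shell access to the node):

# The NFS mount belongs to the host, not the pod's network namespace,
# because the kubelet performs the mount for nfs volumes:
findmnt -t nfs,nfs4
# The source address of the NFS connection is the node IP:
ss -tn '( dport = :2049 )'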
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.