kubernetes-network-policy-recipes

Explain NetworkPolicy + service.type=LoadBalancer & Ingress behavior

ahmetb opened this issue 6 years ago • 9 comments

When I apply a network policy restricting pod-to-pod traffic to pods exposed via a Service.type=LoadBalancer, it keeps working for a while, and then a few minutes later it stops working.

Once I remove the network policy, it still keeps spinning and doesn't load in the browser (or via curl). Health checks seem fine, though:

[screenshot: load balancer health checks reporting healthy]

Repro:

  1. Run: `kubectl run apiserver --image=nginx --labels app=bookstore,role=api --expose --port 80`
  2. `kubectl expose deploy/apiserver --type=LoadBalancer --name=apiserver-external`
  3. Visit site = works
  4. Apply:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
      - podSelector:
          matchLabels:
            app: bookstore
  5. Observe still works after deploying.
  6. Wait a few mins, delete/redeploy policy without from: section.
  7. Visit on browser = stops working
  8. Delete network policy = still doesn't work, spins forever.

ahmetb · Aug 02 '17

I have the same issue. It works well with NodePort, but when the service is a LoadBalancer it doesn't work.

First I denied all traffic:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all
  namespace: nexus-test
spec:
  podSelector: {}

Then I gave access to all pods in the same namespace and on all ports:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: nexus-test
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
    ports:
    - {}

After this I can access the service from pods inside the namespace nexus-test, and it isn't possible to access the pods from outside the namespace. This is the expected behavior.

But the LoadBalancer is not reachable; it says OutOfService (it seems that the deny-all affects something else).

ehernandez-xk · Oct 05 '17

Any luck with solving this? I am facing the same issue.

sachinkl · Mar 28 '18

Same here! Any joy?

ChrisCooney · Oct 26 '18

Looks like a lot of people are getting stuck on this, and I don't think I have the answers. :) I think the answer might be "it depends on the implementation", as the spec doesn't clearly explain this.

I don't think the spec even explains whether the container's port number or the port of the Service in front of it should be used. So I'm sorry, but I don't know much to help here. Any help is appreciated.
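
That said, the interpretation I'd expect (an assumption on my part, since the spec doesn't spell it out) is that the port in a NetworkPolicy rule is matched against the pod's container port, i.e. the Service's targetPort, not the Service port. A minimal sketch of that assumption, reusing the bookstore example above:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow-port
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: bookstore
    ports:
    # Assumption: this is the pod's container port (the Service targetPort),
    # not the port the Service or load balancer listens on.
    - protocol: TCP
      port: 80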

ahmetb · Oct 26 '18

I have the same issue. It works well with NodePort, but when the service is a LoadBalancer it doesn't work.

First I denied all traffic:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all
  namespace: nexus-test
spec:
  podSelector: {}

Then I gave access to all pods in the same namespace and on all ports:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: nexus-test
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
    ports:
    - {}

After this I can access the service from pods inside the namespace nexus-test, and it isn't possible to access the pods from outside the namespace. This is the expected behavior.

But the LoadBalancer is not reachable; it says OutOfService (it seems that the deny-all affects something else).

Isn't the issue here that by specifying:

- podSelector: {}

in the 'from' section you are indicating you only wish to receive from pods and therefore not from external clients?

It allows all pods in namespace nexus-test to receive traffic from all pods in the same namespace on any TCP port and denies inbound traffic to all pods in namespace nexus-test from other namespaces (and IP blocks).

I wonder if instead you should use something like this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: nexus-test
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    - podSelector: {}
    ports:
    - {}

which will:

  • allow all pods in namespace nexus-test to receive traffic from subnet 0.0.0.0/0 on any TCP port

  • allow all pods in namespace nexus-test to receive traffic from all pods in the same namespace on any TCP port (and deny inbound traffic to pods in namespace nexus-test from other namespaces)


Bearing in mind that https://kubernetes.io/docs/concepts/services-networking/network-policies/ states:

"ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs."

namloc2001 · Dec 06 '19

When I apply a network policy restricting pod-to-pod traffic to pods exposed via a Service.type=LoadBalancer, it keeps working for a while, and then a few minutes later it stops working.

Once I remove the network policy, it still keeps spinning and doesn't load in the browser (or via curl). Health checks seem fine, though:

[screenshot: load balancer health checks reporting healthy]

Repro:

1. Run: `kubectl run apiserver --image=nginx --labels app=bookstore,role=api --expose --port 80`

2. `kubectl expose deploy/apiserver --type=LoadBalancer --name=apiserver-external`

3. Visit site = works

4. Apply:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
      - podSelector:
          matchLabels:
            app: bookstore
5. Observe still works after deploying.

6. Wait a few mins, delete/redeploy policy **without** `from:` section.

7. Visit on browser = stops working

8. Delete network policy = still doesn't work, spins forever.

I think the issue here may be because you are not specifying a namespace in the metadata section. So when you remove:

  - from:
      - podSelector:
          matchLabels:
            app: bookstore

The rule is now a blocking rule: it denies inbound traffic to pods in the target namespace with labels app: bookstore and role: api.

Conversely, with the 'from' section as you had it specified, the rule allows pods in the target namespace with labels app: bookstore and role: api to receive traffic from pods in the same namespace with labels app: bookstore on all ports.

However, I don't believe this permits access from outside the cluster, so I think the following is required:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: api-allow
  # note the namespace is included.
  namespace: my-namespace
spec:
  podSelector:
    matchLabels:
      app: bookstore
      role: api
  ingress:
  - from:
    # specify ipBlock with 0.0.0.0/0 to permit all external traffic
    - ipBlock:
        cidr: 0.0.0.0/0  
    - podSelector:
        matchLabels:
          app: bookstore

Which means the rule allows pods in namespace my-namespace with labels app: bookstore and role: api to receive traffic from subnet 0.0.0.0/0 on all ports, and also allows them to receive traffic from pods in the same namespace with labels app: bookstore on all ports.

namloc2001 · Dec 06 '19

I have the same issue. It works well with NodePort, but when the service is a LoadBalancer it doesn't work. First I denied all traffic:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: deny-all
  namespace: nexus-test
spec:
  podSelector: {}

Then I gave access to all pods in the same namespace and on all ports:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: nexus-test
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
    ports:
    - {}

After this I can access the service from pods inside the namespace nexus-test, and it isn't possible to access the pods from outside the namespace. This is the expected behavior. But the LoadBalancer is not reachable; it says OutOfService (it seems that the deny-all affects something else).

Isn't the issue here that by specifying:

- podSelector: {}

in the 'from' section you are indicating you only wish to receive from pods and therefore not from external clients?

It allows all pods in namespace nexus-test to receive traffic from all pods in the same namespace on any TCP port and denies inbound traffic to all pods in namespace nexus-test from other namespaces (and IP blocks).

I wonder if instead you should use something like this:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: nexus-test
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
    - podSelector: {}
    ports:
    - {}

which will:

* allow all pods in namespace nexus-test to receive traffic from subnet 0.0.0.0/0 on any TCP port

* allow all pods in namespace nexus-test to receive traffic from all pods in the same namespace on any TCP port (and deny inbound traffic to pods in namespace nexus-test from other namespaces)

Bearing in mind that https://kubernetes.io/docs/concepts/services-networking/network-policies/ states:

"ipBlock: This selects particular IP CIDR ranges to allow as ingress sources or egress destinations. These should be cluster-external IPs."

But this method would allow traffic from all namespaces, since we have specified 0.0.0.0/0.
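
If that is the concern, one option might be to carve the cluster's pod range out of the ipBlock with except. This is only a sketch, and 10.4.0.0/14 is a placeholder for whatever your cluster's actual pod CIDR is:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  namespace: nexus-test
  name: allow-external-only
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        # placeholder: replace with your cluster's pod CIDR
        - 10.4.0.0/14
    # still allow pods within this namespace
    - podSelector: {}

Whether ipBlock rules match cluster-internal pod IPs at all is implementation-dependent, though, so this may behave differently across CNIs.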

pratheesh-new · Jul 06 '20

I have the same issue. Is there any solution now?

pengj666 · Jun 19 '22

But this method would allow traffic from all namespaces, since we have specified 0.0.0.0/0.

I thought so too; however, it doesn't appear to be the case for me (GKE with DPv2). Specifying 0.0.0.0/0 allows external traffic over the LoadBalancer, but not traffic from Pods in the cluster, unless you add a separate podSelector rule to cover them.

If I specify:

  ingress:
  - from:
    - ipBlock:
        cidr: '0.0.0.0/0'

This is enough to allow external traffic, without also directly allowing traffic from other Pods in the cluster. Not sure if this is a bug, or intentional behavior.

Since a Service of type LoadBalancer routes via the node, the other option, if you exclude the internal network from 0.0.0.0/0, is to specifically allow the node CIDR range, like so:

  ingress:
  - from:
    # Allow internet traffic (but not internal traffic)
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    # Allow traffic from nodes
    - ipBlock:
        cidr: '10.138.0.0/20' # cluster network subnet primary range

But again, this actually doesn't seem needed in my testing.
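
For reference, here is that fragment dropped into a complete policy. Just a sketch: the policy name, namespace, and pod selector are placeholders, and the CIDRs are the example values from above:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-loadbalancer   # placeholder name
  namespace: my-namespace    # placeholder namespace
spec:
  podSelector: {}            # or a narrower selector for the backend pods
  ingress:
  - from:
    # Allow internet traffic (but not internal traffic)
    - ipBlock:
        cidr: '0.0.0.0/0'
        except: ['10.0.0.0/8']
    # Allow traffic from nodes
    - ipBlock:
        cidr: '10.138.0.0/20' # cluster network subnet primary range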

WilliamDenniss · Sep 17 '23