Blocked access from pods in the host network namespace
We're using Calico 3.1.1 as our network policy controller, along with flannel+wireguard for our basic networking. We have an issue where one of our teams makes use of `kubectl proxy` as part of their deployment process.
They can add a policy that allows traffic from the `kube-system` namespace, and I can see calico-node does what makes sense on the node: there are ipset entries for each pod in `kube-system`, including the kube-apiservers, which run in the host network namespace and are therefore included with the host IP from the primary NIC.
Unfortunately, traffic from pods in the host network namespace enters via the `flannel.wg` interface, and so appears to come from the IP address on that interface (always 10.1.x.0 in our case).
I can add `ipBlock` rules for every possible 10.1.x.0/32 address, but this feels like quite a kludge.
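For illustration, the workaround looks something like the following (a sketch only; the 10.1.x.0/32 addresses are hypothetical examples of per-node flannel.wg tunnel IPs, and one entry would be needed for every node in the cluster):

```yaml
ingress:
- from:
  - namespaceSelector:
      matchLabels:
        name: kube-system
  # One ipBlock per node's overlay tunnel address (example addresses);
  # this list must be maintained by hand as nodes come and go.
  - ipBlock:
      cidr: 10.1.0.0/32
  - ipBlock:
      cidr: 10.1.1.0/32
  - ipBlock:
      cidr: 10.1.2.0/32
```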
Is this something that could be addressed by Calico?
Expected Behavior
Create a network policy for a service in the `default` namespace which allows access from the `kube-system` namespace (where `kube-apiserver` lives). When connecting to the cluster using `kubectl proxy`, the service in the `default` namespace is accessible.
Current Behavior
The service in the `default` namespace is not accessible.
Possible Solution
For any host networked pods, perhaps Calico could also add network policy rules which match the overlay network interface?
Steps to Reproduce (for bugs)
- Using Canal (Flannel 0.10.0 + wireguard for networking, Calico 3.1.1 as the network policy controller), create a network policy for a service in the `default` namespace. Allow access to this service from the `kube-system` namespace, where `kube-apiserver` resides:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: admin-console
  namespace: default
  labels:
    app: admin-console
spec:
  podSelector:
    matchLabels:
      app: admin-console
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kube-system
```
- Run `kubectl proxy`.
- Try to connect to the service:
```
% curl --max-time 5 --verbose http://127.0.0.1:8001/api/v1/namespaces/default/services/http:admin-console:80/proxy/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8001 (#0)
> GET /api/v1/namespaces/default/services/http:admin-console:80/proxy/ HTTP/1.1
> Host: 127.0.0.1:8001
> User-Agent: curl/7.54.0
> Accept: */*
>
* Operation timed out after 5004 milliseconds with 0 bytes received
* Closing connection 0
curl: (28) Operation timed out after 5004 milliseconds with 0 bytes received
```

Meanwhile, `kubectl proxy` logs:

```
I0518 14:27:21.675696 5602 logs.go:49] http: proxy error: context canceled
```
Context
One of our development teams uses proxy access as part of their deployment process, and this issue blocks their work on implementing network policies.
Your Environment
- Calico version: 3.1.1
- Orchestrator version (e.g. kubernetes, mesos, rkt): Kubernetes 1.8.12
- Operating System and version: Container Linux by CoreOS stable (1688.5.3)
- Link to your project (optional):