Fail with NetworkPolicy enabled
What happened:
When installing ingress-nginx with the Helm chart into a cluster that has NetworkPolicies enabled, the Kubernetes API server is, by default, unable to reach the validating webhook controller:
kubectl --context test.aks -n test apply -f ~/Downloads/ingress.yaml
networkpolicy.networking.k8s.io/prometheus-kube-prometheus-prometheus-ingress unchanged
Error from server (InternalError): error when creating "/home/mjudeikis/Downloads/ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-controller-admission.ingress.svc:443/networking/v1/ingresses?timeout=10s": EOF
where ingress.yaml is:
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: prometheus-kube-prometheus-prometheus-ingress
  namespace: test
spec:
  ingress:
  - ports:
    - protocol: TCP
      port: 443
    from:
    - namespaceSelector:
        matchLabels:
          name: ingress
  podSelector: {}
  policyTypes:
  - Ingress
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus-kube-prometheus-prometheus-ingress
  namespace: test
  annotations:
    cert-manager.io/cluster-issuer: "prod-le-dns01"
spec:
  ingressClassName: external
  rules:
  - host: prometheus-kube-prometheus-prometheus.test.test-aks.mjdev.dnstest-az.wf.appvia.dev
    http:
      paths:
      - backend:
          service:
            name: prometheus-kube-prometheus-prometheus
            port:
              number: 4180
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - prometheus-kube-prometheus-prometheus.test.test-aks.mjdev.dnstest-az.wf.appvia.dev
    secretName: prometheus-kube-prometheus-prometheus-ingress-tls
This happens because the Kubernetes API server can't talk to the ingress controller (unless it is deployed into a namespace whose pre-existing NetworkPolicies allow that traffic).
This can be fixed by deploying the following into the ingress namespace:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: networkpolicy
  namespace: ingress
spec:
  ingress:
  - {}
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
  - Ingress
The Helm chart could support something like networkPoliciesEnabled: true to deploy the rules required for basic usage.
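A rough sketch of what such a chart option could look like. Both the networkPoliciesEnabled value and the rendered template below are illustrative assumptions, not existing chart content; the ingress-nginx.fullname/ingress-nginx.name helpers are assumed to match the chart's naming helpers:

```yaml
# values.yaml (hypothetical flag, not a current chart value)
controller:
  networkPoliciesEnabled: true
```

```yaml
# templates/controller-networkpolicy.yaml (sketch of what the chart could render)
{{- if .Values.controller.networkPoliciesEnabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ include "ingress-nginx.fullname" . }}-allow-ingress
  namespace: {{ .Release.Namespace }}
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: {{ include "ingress-nginx.name" . }}
  policyTypes:
  - Ingress
  ingress:
  - {}
{{- end }}
```

Disabled by default, this would let chart users opt in without forking or wrapping the chart.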
What you expected to happen:
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
Kubernetes version (use kubectl version):
A slightly older version, but this does not change the root issue:
[mjudeikis@unknown wayfinder]$ kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: v1.2.1
Build: 08848d69e0c83992c89da18e70ea708752f21d7a
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.10
-------------------------------------------------------------------------------
Environment:
-
Cloud provider or hardware configuration:
-
OS (e.g. from /etc/os-release):
-
Kernel (e.g.
uname -a): -
Install tools:
Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
-
Basic cluster related info:
kubectl version
kubectl get nodes -o wide
-
How was the ingress-nginx-controller installed:
- If helm was used then please show output of helm ls -A | grep -i ingress
- If helm was used then please show output of helm -n <ingresscontrollernamespace> get values <helmreleasename>
- If helm was not used, then copy/paste the complete precise command used to install the controller, along with the flags and options used
- If you have more than one instance of the ingress-nginx-controller installed in the same cluster, please provide details for all the instances
-
Current State of the controller:
kubectl describe ingressclasses
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
-
Current state of ingress object, if applicable:
kubectl -n <appnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
- If applicable, then, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag
-
Others:
- Any other related information like:
  - copy/paste of the snippet (if applicable)
  - kubectl describe ... of any custom configmap(s) created and in use
- Any other related information that may help
How to reproduce this issue:
Anything else we need to know:
Happy to contribute this, if we can agree it is something we want to support (I think we should, as it prevents out-of-the-box usage).
/remove-kind bug /kind feature
@mjudeikis, I think this should be more documentation than a feature in the helm chart.
I could be persuaded that this should be a feature, but we also have to support it in the static deployments.
I also don't know if we should have tests for it, or how we would; if all the e2e tests still pass, that would be good enough.
@rikatz @tao12345666333 thoughts? It should be documented to allow all traffic to ingress-nginx
/kind feature /priority important-longterm /triage accepted
We do document it here: https://kubernetes.github.io/ingress-nginx/deploy/#webhook-network-access
We could probably do more, with a dedicated network policy page.
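For such a page, a tighter rule than the allow-all one above could limit ingress to just the webhook port. This is only a sketch: the port 8443 assumed below is the admission webhook's default container port in recent controller versions, and the ingress namespace and label selector must match your deployment:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-webhook
  namespace: ingress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 8443
```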
My challenge with this being documentation-only is that the Helm chart can't bring ingress to a running state without external actions. So if you use the Helm chart to bootstrap and manage your environments, you need to fork it or wrap it with some other tooling to create the NetworkPolicy.
I don't think it should be part of the ingress code, but chart for sure.
Would something like this ^^ be acceptable? Simple, disabled by default?